Jan 27 07:46:57 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 27 07:46:57 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 27 07:46:57 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 07:46:57 localhost kernel: BIOS-provided physical RAM map:
Jan 27 07:46:57 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 27 07:46:57 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 27 07:46:57 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 27 07:46:57 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 27 07:46:57 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 27 07:46:57 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 27 07:46:57 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 27 07:46:57 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 27 07:46:57 localhost kernel: NX (Execute Disable) protection: active
Jan 27 07:46:57 localhost kernel: APIC: Static calls initialized
Jan 27 07:46:57 localhost kernel: SMBIOS 2.8 present.
Jan 27 07:46:57 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 27 07:46:57 localhost kernel: Hypervisor detected: KVM
Jan 27 07:46:57 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 27 07:46:57 localhost kernel: kvm-clock: using sched offset of 7333241917 cycles
Jan 27 07:46:57 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 27 07:46:57 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 27 07:46:57 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 27 07:46:57 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 27 07:46:57 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 27 07:46:57 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 27 07:46:57 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 27 07:46:57 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 27 07:46:57 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 27 07:46:57 localhost kernel: Using GB pages for direct mapping
Jan 27 07:46:57 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 27 07:46:57 localhost kernel: ACPI: Early table checksum verification disabled
Jan 27 07:46:57 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 27 07:46:57 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 07:46:57 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 07:46:57 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 07:46:57 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 27 07:46:57 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 07:46:57 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 07:46:57 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 27 07:46:57 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 27 07:46:57 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 27 07:46:57 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 27 07:46:57 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 27 07:46:57 localhost kernel: No NUMA configuration found
Jan 27 07:46:57 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 27 07:46:57 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 27 07:46:57 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 27 07:46:57 localhost kernel: Zone ranges:
Jan 27 07:46:57 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 27 07:46:57 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 27 07:46:57 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 27 07:46:57 localhost kernel:   Device   empty
Jan 27 07:46:57 localhost kernel: Movable zone start for each node
Jan 27 07:46:57 localhost kernel: Early memory node ranges
Jan 27 07:46:57 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 27 07:46:57 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 27 07:46:57 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 27 07:46:57 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 27 07:46:57 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 27 07:46:57 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 27 07:46:57 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 27 07:46:57 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 27 07:46:57 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 27 07:46:57 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 27 07:46:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 27 07:46:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 27 07:46:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 27 07:46:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 27 07:46:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 27 07:46:57 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 27 07:46:57 localhost kernel: TSC deadline timer available
Jan 27 07:46:57 localhost kernel: CPU topo: Max. logical packages:   8
Jan 27 07:46:57 localhost kernel: CPU topo: Max. logical dies:       8
Jan 27 07:46:57 localhost kernel: CPU topo: Max. dies per package:   1
Jan 27 07:46:57 localhost kernel: CPU topo: Max. threads per core:   1
Jan 27 07:46:57 localhost kernel: CPU topo: Num. cores per package:     1
Jan 27 07:46:57 localhost kernel: CPU topo: Num. threads per package:   1
Jan 27 07:46:57 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 27 07:46:57 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 27 07:46:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 27 07:46:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 27 07:46:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 27 07:46:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 27 07:46:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 27 07:46:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 27 07:46:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 27 07:46:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 27 07:46:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 27 07:46:57 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 27 07:46:57 localhost kernel: Booting paravirtualized kernel on KVM
Jan 27 07:46:57 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 27 07:46:57 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 27 07:46:57 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 27 07:46:57 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 27 07:46:57 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 27 07:46:57 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 27 07:46:57 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 07:46:57 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 27 07:46:57 localhost kernel: random: crng init done
Jan 27 07:46:57 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 27 07:46:57 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 27 07:46:57 localhost kernel: Fallback order for Node 0: 0 
Jan 27 07:46:57 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 27 07:46:57 localhost kernel: Policy zone: Normal
Jan 27 07:46:57 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 27 07:46:57 localhost kernel: software IO TLB: area num 8.
Jan 27 07:46:57 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 27 07:46:57 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 27 07:46:57 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 27 07:46:57 localhost kernel: Dynamic Preempt: voluntary
Jan 27 07:46:57 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 27 07:46:57 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 27 07:46:57 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 27 07:46:57 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 27 07:46:57 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 27 07:46:57 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 27 07:46:57 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 27 07:46:57 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 27 07:46:57 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 07:46:57 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 07:46:57 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 07:46:57 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 27 07:46:57 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 27 07:46:57 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 27 07:46:57 localhost kernel: Console: colour VGA+ 80x25
Jan 27 07:46:57 localhost kernel: printk: console [ttyS0] enabled
Jan 27 07:46:57 localhost kernel: ACPI: Core revision 20230331
Jan 27 07:46:57 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 27 07:46:57 localhost kernel: x2apic enabled
Jan 27 07:46:57 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 27 07:46:57 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 27 07:46:57 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 27 07:46:57 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 27 07:46:57 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 27 07:46:57 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 27 07:46:57 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 27 07:46:57 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 27 07:46:57 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 27 07:46:57 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 27 07:46:57 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 27 07:46:57 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 27 07:46:57 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 27 07:46:57 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 27 07:46:57 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 27 07:46:57 localhost kernel: x86/bugs: return thunk changed
Jan 27 07:46:57 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 27 07:46:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 27 07:46:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 27 07:46:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 27 07:46:57 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 27 07:46:57 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 27 07:46:57 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 27 07:46:57 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 27 07:46:57 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 27 07:46:57 localhost kernel: landlock: Up and running.
Jan 27 07:46:57 localhost kernel: Yama: becoming mindful.
Jan 27 07:46:57 localhost kernel: SELinux:  Initializing.
Jan 27 07:46:57 localhost kernel: LSM support for eBPF active
Jan 27 07:46:57 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 27 07:46:57 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 27 07:46:57 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 27 07:46:57 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 27 07:46:57 localhost kernel: ... version:                0
Jan 27 07:46:57 localhost kernel: ... bit width:              48
Jan 27 07:46:57 localhost kernel: ... generic registers:      6
Jan 27 07:46:57 localhost kernel: ... value mask:             0000ffffffffffff
Jan 27 07:46:57 localhost kernel: ... max period:             00007fffffffffff
Jan 27 07:46:57 localhost kernel: ... fixed-purpose events:   0
Jan 27 07:46:57 localhost kernel: ... event mask:             000000000000003f
Jan 27 07:46:57 localhost kernel: signal: max sigframe size: 1776
Jan 27 07:46:57 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 27 07:46:57 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 27 07:46:57 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 27 07:46:57 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 27 07:46:57 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 27 07:46:57 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 27 07:46:57 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 27 07:46:57 localhost kernel: node 0 deferred pages initialised in 10ms
Jan 27 07:46:57 localhost kernel: Memory: 7763596K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
Jan 27 07:46:57 localhost kernel: devtmpfs: initialized
Jan 27 07:46:57 localhost kernel: x86/mm: Memory block size: 128MB
Jan 27 07:46:57 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 27 07:46:57 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 27 07:46:57 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 27 07:46:57 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 27 07:46:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 27 07:46:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 27 07:46:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 27 07:46:57 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 27 07:46:57 localhost kernel: audit: type=2000 audit(1769500016.456:1): state=initialized audit_enabled=0 res=1
Jan 27 07:46:57 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 27 07:46:57 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 27 07:46:57 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 27 07:46:57 localhost kernel: cpuidle: using governor menu
Jan 27 07:46:57 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 27 07:46:57 localhost kernel: PCI: Using configuration type 1 for base access
Jan 27 07:46:57 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 27 07:46:57 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 27 07:46:57 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 27 07:46:57 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 27 07:46:57 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 27 07:46:57 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 27 07:46:57 localhost kernel: Demotion targets for Node 0: null
Jan 27 07:46:57 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 27 07:46:57 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 27 07:46:57 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 27 07:46:57 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 27 07:46:57 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 27 07:46:57 localhost kernel: ACPI: Interpreter enabled
Jan 27 07:46:57 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 27 07:46:57 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 27 07:46:57 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 27 07:46:57 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 27 07:46:57 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 27 07:46:57 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 27 07:46:57 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [3] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [4] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [5] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [6] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [7] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [8] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [9] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [10] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [11] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [12] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [13] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [14] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [15] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [16] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [17] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [18] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [19] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [20] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [21] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [22] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [23] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [24] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [25] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [26] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [27] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [28] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [29] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [30] registered
Jan 27 07:46:57 localhost kernel: acpiphp: Slot [31] registered
Jan 27 07:46:57 localhost kernel: PCI host bridge to bus 0000:00
Jan 27 07:46:57 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 27 07:46:57 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 27 07:46:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 27 07:46:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 27 07:46:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 27 07:46:57 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 27 07:46:57 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 27 07:46:57 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 27 07:46:57 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 27 07:46:57 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 27 07:46:57 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 27 07:46:57 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 27 07:46:57 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 27 07:46:57 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 27 07:46:57 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 27 07:46:57 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 27 07:46:57 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 27 07:46:57 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 27 07:46:57 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 27 07:46:57 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 27 07:46:57 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 27 07:46:57 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 27 07:46:57 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 27 07:46:57 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 27 07:46:57 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 27 07:46:57 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 27 07:46:57 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 27 07:46:57 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 27 07:46:57 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 27 07:46:57 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 27 07:46:57 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 27 07:46:57 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 27 07:46:57 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 27 07:46:57 localhost kernel: iommu: Default domain type: Translated
Jan 27 07:46:57 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 27 07:46:57 localhost kernel: SCSI subsystem initialized
Jan 27 07:46:57 localhost kernel: ACPI: bus type USB registered
Jan 27 07:46:57 localhost kernel: usbcore: registered new interface driver usbfs
Jan 27 07:46:57 localhost kernel: usbcore: registered new interface driver hub
Jan 27 07:46:57 localhost kernel: usbcore: registered new device driver usb
Jan 27 07:46:57 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 27 07:46:57 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 27 07:46:57 localhost kernel: PTP clock support registered
Jan 27 07:46:57 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 27 07:46:57 localhost kernel: NetLabel: Initializing
Jan 27 07:46:57 localhost kernel: NetLabel:  domain hash size = 128
Jan 27 07:46:57 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 27 07:46:57 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 27 07:46:57 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 27 07:46:57 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 27 07:46:57 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 27 07:46:57 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 27 07:46:57 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 27 07:46:57 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 27 07:46:57 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 27 07:46:57 localhost kernel: vgaarb: loaded
Jan 27 07:46:57 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 27 07:46:57 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 27 07:46:57 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 27 07:46:57 localhost kernel: pnp: PnP ACPI init
Jan 27 07:46:57 localhost kernel: pnp 00:03: [dma 2]
Jan 27 07:46:57 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 27 07:46:57 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 27 07:46:57 localhost kernel: NET: Registered PF_INET protocol family
Jan 27 07:46:57 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 27 07:46:57 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 27 07:46:57 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 27 07:46:57 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 27 07:46:57 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 27 07:46:57 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 27 07:46:57 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 27 07:46:57 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 27 07:46:57 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 27 07:46:57 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 27 07:46:57 localhost kernel: NET: Registered PF_XDP protocol family
Jan 27 07:46:57 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 27 07:46:57 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 27 07:46:57 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 27 07:46:57 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 27 07:46:57 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 27 07:46:57 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 27 07:46:57 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 27 07:46:57 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 83595 usecs
Jan 27 07:46:57 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 27 07:46:57 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 27 07:46:57 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 27 07:46:57 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 27 07:46:57 localhost kernel: ACPI: bus type thunderbolt registered
Jan 27 07:46:57 localhost kernel: Initialise system trusted keyrings
Jan 27 07:46:57 localhost kernel: Key type blacklist registered
Jan 27 07:46:57 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 27 07:46:57 localhost kernel: zbud: loaded
Jan 27 07:46:57 localhost kernel: integrity: Platform Keyring initialized
Jan 27 07:46:57 localhost kernel: integrity: Machine keyring initialized
Jan 27 07:46:57 localhost kernel: Freeing initrd memory: 87956K
Jan 27 07:46:57 localhost kernel: NET: Registered PF_ALG protocol family
Jan 27 07:46:57 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 27 07:46:57 localhost kernel: Key type asymmetric registered
Jan 27 07:46:57 localhost kernel: Asymmetric key parser 'x509' registered
Jan 27 07:46:57 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 27 07:46:57 localhost kernel: io scheduler mq-deadline registered
Jan 27 07:46:57 localhost kernel: io scheduler kyber registered
Jan 27 07:46:57 localhost kernel: io scheduler bfq registered
Jan 27 07:46:57 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 27 07:46:57 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 27 07:46:57 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 27 07:46:57 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 27 07:46:57 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 27 07:46:57 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 27 07:46:57 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 27 07:46:57 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 27 07:46:57 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 27 07:46:57 localhost kernel: Non-volatile memory driver v1.3
Jan 27 07:46:57 localhost kernel: rdac: device handler registered
Jan 27 07:46:57 localhost kernel: hp_sw: device handler registered
Jan 27 07:46:57 localhost kernel: emc: device handler registered
Jan 27 07:46:57 localhost kernel: alua: device handler registered
Jan 27 07:46:57 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 27 07:46:57 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 27 07:46:57 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 27 07:46:57 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 27 07:46:57 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 27 07:46:57 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 27 07:46:57 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 27 07:46:57 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 27 07:46:57 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 27 07:46:57 localhost kernel: hub 1-0:1.0: USB hub found
Jan 27 07:46:57 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 27 07:46:57 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 27 07:46:57 localhost kernel: usbserial: USB Serial support registered for generic
Jan 27 07:46:57 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 27 07:46:57 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 27 07:46:57 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 27 07:46:57 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 27 07:46:57 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 27 07:46:57 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 27 07:46:57 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-27T07:46:56 UTC (1769500016)
Jan 27 07:46:57 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 27 07:46:57 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 27 07:46:57 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 27 07:46:57 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 27 07:46:57 localhost kernel: usbcore: registered new interface driver usbhid
Jan 27 07:46:57 localhost kernel: usbhid: USB HID core driver
Jan 27 07:46:57 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 27 07:46:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 27 07:46:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 27 07:46:57 localhost kernel: Initializing XFRM netlink socket
Jan 27 07:46:57 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 27 07:46:57 localhost kernel: Segment Routing with IPv6
Jan 27 07:46:57 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 27 07:46:57 localhost kernel: mpls_gso: MPLS GSO support
Jan 27 07:46:57 localhost kernel: IPI shorthand broadcast: enabled
Jan 27 07:46:57 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 27 07:46:57 localhost kernel: AES CTR mode by8 optimization enabled
Jan 27 07:46:57 localhost kernel: sched_clock: Marking stable (1241003014, 146999193)->(1491399423, -103397216)
Jan 27 07:46:57 localhost kernel: registered taskstats version 1
Jan 27 07:46:57 localhost kernel: Loading compiled-in X.509 certificates
Jan 27 07:46:57 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 27 07:46:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 27 07:46:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 27 07:46:57 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 27 07:46:57 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 27 07:46:57 localhost kernel: Demotion targets for Node 0: null
Jan 27 07:46:57 localhost kernel: page_owner is disabled
Jan 27 07:46:57 localhost kernel: Key type .fscrypt registered
Jan 27 07:46:57 localhost kernel: Key type fscrypt-provisioning registered
Jan 27 07:46:57 localhost kernel: Key type big_key registered
Jan 27 07:46:57 localhost kernel: Key type encrypted registered
Jan 27 07:46:57 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 27 07:46:57 localhost kernel: Loading compiled-in module X.509 certificates
Jan 27 07:46:57 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 27 07:46:57 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 27 07:46:57 localhost kernel: ima: No architecture policies found
Jan 27 07:46:57 localhost kernel: evm: Initialising EVM extended attributes:
Jan 27 07:46:57 localhost kernel: evm: security.selinux
Jan 27 07:46:57 localhost kernel: evm: security.SMACK64 (disabled)
Jan 27 07:46:57 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 27 07:46:57 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 27 07:46:57 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 27 07:46:57 localhost kernel: evm: security.apparmor (disabled)
Jan 27 07:46:57 localhost kernel: evm: security.ima
Jan 27 07:46:57 localhost kernel: evm: security.capability
Jan 27 07:46:57 localhost kernel: evm: HMAC attrs: 0x1
Jan 27 07:46:57 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 27 07:46:57 localhost kernel: Running certificate verification RSA selftest
Jan 27 07:46:57 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 27 07:46:57 localhost kernel: Running certificate verification ECDSA selftest
Jan 27 07:46:57 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 27 07:46:57 localhost kernel: clk: Disabling unused clocks
Jan 27 07:46:57 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 27 07:46:57 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 27 07:46:57 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 27 07:46:57 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 27 07:46:57 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 27 07:46:57 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 27 07:46:57 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 27 07:46:57 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 27 07:46:57 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 27 07:46:57 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 27 07:46:57 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 27 07:46:57 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 27 07:46:57 localhost kernel: Run /init as init process
Jan 27 07:46:57 localhost kernel:   with arguments:
Jan 27 07:46:57 localhost kernel:     /init
Jan 27 07:46:57 localhost kernel:   with environment:
Jan 27 07:46:57 localhost kernel:     HOME=/
Jan 27 07:46:57 localhost kernel:     TERM=linux
Jan 27 07:46:57 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 27 07:46:57 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 27 07:46:57 localhost systemd[1]: Detected virtualization kvm.
Jan 27 07:46:57 localhost systemd[1]: Detected architecture x86-64.
Jan 27 07:46:57 localhost systemd[1]: Running in initrd.
Jan 27 07:46:57 localhost systemd[1]: No hostname configured, using default hostname.
Jan 27 07:46:57 localhost systemd[1]: Hostname set to <localhost>.
Jan 27 07:46:57 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 27 07:46:57 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 27 07:46:57 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 27 07:46:57 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 27 07:46:57 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 27 07:46:57 localhost systemd[1]: Reached target Local File Systems.
Jan 27 07:46:57 localhost systemd[1]: Reached target Path Units.
Jan 27 07:46:57 localhost systemd[1]: Reached target Slice Units.
Jan 27 07:46:57 localhost systemd[1]: Reached target Swaps.
Jan 27 07:46:57 localhost systemd[1]: Reached target Timer Units.
Jan 27 07:46:57 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 27 07:46:57 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 27 07:46:57 localhost systemd[1]: Listening on Journal Socket.
Jan 27 07:46:57 localhost systemd[1]: Listening on udev Control Socket.
Jan 27 07:46:57 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 27 07:46:57 localhost systemd[1]: Reached target Socket Units.
Jan 27 07:46:57 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 27 07:46:57 localhost systemd[1]: Starting Journal Service...
Jan 27 07:46:57 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 27 07:46:57 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 27 07:46:57 localhost systemd[1]: Starting Create System Users...
Jan 27 07:46:57 localhost systemd[1]: Starting Setup Virtual Console...
Jan 27 07:46:57 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 27 07:46:57 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 27 07:46:57 localhost systemd-journald[307]: Journal started
Jan 27 07:46:57 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/da3c96461d5e49f1b628025d3ab5e115) is 8.0M, max 153.6M, 145.6M free.
Jan 27 07:46:57 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Jan 27 07:46:57 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Jan 27 07:46:57 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 27 07:46:57 localhost systemd[1]: Started Journal Service.
Jan 27 07:46:57 localhost systemd[1]: Finished Create System Users.
Jan 27 07:46:57 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 27 07:46:57 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 27 07:46:57 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 27 07:46:57 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 27 07:46:57 localhost systemd[1]: Finished Setup Virtual Console.
Jan 27 07:46:57 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 27 07:46:57 localhost systemd[1]: Starting dracut cmdline hook...
Jan 27 07:46:57 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Jan 27 07:46:57 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 07:46:57 localhost systemd[1]: Finished dracut cmdline hook.
Jan 27 07:46:57 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 27 07:46:57 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 27 07:46:57 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 27 07:46:57 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 27 07:46:57 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 27 07:46:57 localhost kernel: RPC: Registered udp transport module.
Jan 27 07:46:57 localhost kernel: RPC: Registered tcp transport module.
Jan 27 07:46:57 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 27 07:46:57 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 27 07:46:58 localhost rpc.statd[445]: Version 2.5.4 starting
Jan 27 07:46:58 localhost rpc.statd[445]: Initializing NSM state
Jan 27 07:46:58 localhost rpc.idmapd[450]: Setting log level to 0
Jan 27 07:46:58 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 27 07:46:58 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 27 07:46:58 localhost systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Jan 27 07:46:58 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 27 07:46:58 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 27 07:46:58 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 27 07:46:58 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 27 07:46:58 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 27 07:46:58 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 07:46:58 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 27 07:46:58 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 07:46:58 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 07:46:58 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 27 07:46:58 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 27 07:46:58 localhost systemd[1]: Reached target Network.
Jan 27 07:46:58 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 27 07:46:58 localhost systemd[1]: Starting dracut initqueue hook...
Jan 27 07:46:58 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 27 07:46:58 localhost systemd[1]: Reached target System Initialization.
Jan 27 07:46:58 localhost systemd[1]: Reached target Basic System.
Jan 27 07:46:58 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 27 07:46:58 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 27 07:46:58 localhost kernel:  vda: vda1
Jan 27 07:46:58 localhost systemd-udevd[486]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 07:46:58 localhost kernel: libata version 3.00 loaded.
Jan 27 07:46:58 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 27 07:46:58 localhost kernel: scsi host0: ata_piix
Jan 27 07:46:58 localhost kernel: scsi host1: ata_piix
Jan 27 07:46:58 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 27 07:46:58 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 27 07:46:58 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 27 07:46:58 localhost systemd[1]: Reached target Initrd Root Device.
Jan 27 07:46:58 localhost kernel: ata1: found unknown device (class 0)
Jan 27 07:46:58 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 27 07:46:58 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 27 07:46:58 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 27 07:46:58 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 27 07:46:58 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 27 07:46:58 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 27 07:46:58 localhost systemd[1]: Finished dracut initqueue hook.
Jan 27 07:46:58 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 27 07:46:58 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 27 07:46:58 localhost systemd[1]: Reached target Remote File Systems.
Jan 27 07:46:58 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 27 07:46:58 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 27 07:46:58 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 27 07:46:58 localhost systemd-fsck[559]: /usr/sbin/fsck.xfs: XFS file system.
Jan 27 07:46:58 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 27 07:46:58 localhost systemd[1]: Mounting /sysroot...
Jan 27 07:46:59 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 27 07:46:59 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 27 07:46:59 localhost kernel: XFS (vda1): Ending clean mount
Jan 27 07:46:59 localhost systemd[1]: Mounted /sysroot.
Jan 27 07:46:59 localhost systemd[1]: Reached target Initrd Root File System.
Jan 27 07:46:59 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 27 07:46:59 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 27 07:46:59 localhost systemd[1]: Reached target Initrd File Systems.
Jan 27 07:46:59 localhost systemd[1]: Reached target Initrd Default Target.
Jan 27 07:46:59 localhost systemd[1]: Starting dracut mount hook...
Jan 27 07:46:59 localhost systemd[1]: Finished dracut mount hook.
Jan 27 07:46:59 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 27 07:46:59 localhost rpc.idmapd[450]: exiting on signal 15
Jan 27 07:46:59 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 27 07:46:59 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 27 07:46:59 localhost systemd[1]: Stopped target Network.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Timer Units.
Jan 27 07:46:59 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 27 07:46:59 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Basic System.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Path Units.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Remote File Systems.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Slice Units.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Socket Units.
Jan 27 07:46:59 localhost systemd[1]: Stopped target System Initialization.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Local File Systems.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Swaps.
Jan 27 07:46:59 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped dracut mount hook.
Jan 27 07:46:59 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 27 07:46:59 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 27 07:46:59 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 27 07:46:59 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 27 07:46:59 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 27 07:46:59 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 27 07:46:59 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 27 07:46:59 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 27 07:46:59 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 27 07:46:59 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 27 07:46:59 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 27 07:46:59 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 27 07:46:59 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Closed udev Control Socket.
Jan 27 07:46:59 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Closed udev Kernel Socket.
Jan 27 07:46:59 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 27 07:46:59 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 27 07:46:59 localhost systemd[1]: Starting Cleanup udev Database...
Jan 27 07:46:59 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 27 07:46:59 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 27 07:46:59 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Stopped Create System Users.
Jan 27 07:46:59 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 27 07:46:59 localhost systemd[1]: Finished Cleanup udev Database.
Jan 27 07:46:59 localhost systemd[1]: Reached target Switch Root.
Jan 27 07:46:59 localhost systemd[1]: Starting Switch Root...
Jan 27 07:46:59 localhost systemd[1]: Switching root.
Jan 27 07:46:59 localhost systemd-journald[307]: Journal stopped
Jan 27 07:47:00 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
Jan 27 07:47:00 localhost kernel: audit: type=1404 audit(1769500019.981:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 27 07:47:00 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 07:47:00 localhost kernel: SELinux:  policy capability open_perms=1
Jan 27 07:47:00 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 07:47:00 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 27 07:47:00 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 07:47:00 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 07:47:00 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 07:47:00 localhost kernel: audit: type=1403 audit(1769500020.160:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 27 07:47:00 localhost systemd[1]: Successfully loaded SELinux policy in 184.302ms.
Jan 27 07:47:00 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 37.666ms.
Jan 27 07:47:00 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 27 07:47:00 localhost systemd[1]: Detected virtualization kvm.
Jan 27 07:47:00 localhost systemd[1]: Detected architecture x86-64.
Jan 27 07:47:00 localhost systemd-rc-local-generator[640]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 07:47:00 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 27 07:47:00 localhost systemd[1]: Stopped Switch Root.
Jan 27 07:47:00 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 27 07:47:00 localhost systemd[1]: Created slice Slice /system/getty.
Jan 27 07:47:00 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 27 07:47:00 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 27 07:47:00 localhost systemd[1]: Created slice User and Session Slice.
Jan 27 07:47:00 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 27 07:47:00 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 27 07:47:00 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 27 07:47:00 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 27 07:47:00 localhost systemd[1]: Stopped target Switch Root.
Jan 27 07:47:00 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 27 07:47:00 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 27 07:47:00 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 27 07:47:00 localhost systemd[1]: Reached target Path Units.
Jan 27 07:47:00 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 27 07:47:00 localhost systemd[1]: Reached target Slice Units.
Jan 27 07:47:00 localhost systemd[1]: Reached target Swaps.
Jan 27 07:47:00 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 27 07:47:00 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 27 07:47:00 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 27 07:47:00 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 27 07:47:00 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 27 07:47:00 localhost systemd[1]: Listening on udev Control Socket.
Jan 27 07:47:00 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 27 07:47:00 localhost systemd[1]: Mounting Huge Pages File System...
Jan 27 07:47:00 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 27 07:47:00 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 27 07:47:00 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 27 07:47:00 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 27 07:47:00 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 27 07:47:00 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 07:47:00 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 27 07:47:00 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 27 07:47:00 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 27 07:47:00 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 27 07:47:00 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 27 07:47:00 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 27 07:47:00 localhost systemd[1]: Stopped Journal Service.
Jan 27 07:47:00 localhost systemd[1]: Starting Journal Service...
Jan 27 07:47:00 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 27 07:47:00 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 27 07:47:00 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 07:47:00 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 27 07:47:00 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 27 07:47:00 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 27 07:47:00 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 27 07:47:00 localhost kernel: fuse: init (API version 7.37)
Jan 27 07:47:00 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 27 07:47:00 localhost systemd[1]: Mounted Huge Pages File System.
Jan 27 07:47:00 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 27 07:47:00 localhost systemd-journald[681]: Journal started
Jan 27 07:47:00 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 27 07:47:00 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 27 07:47:00 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 27 07:47:00 localhost systemd[1]: Started Journal Service.
Jan 27 07:47:00 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 27 07:47:00 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 27 07:47:00 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 27 07:47:00 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 07:47:00 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 07:47:00 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 27 07:47:00 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 27 07:47:00 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 27 07:47:00 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 27 07:47:00 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 27 07:47:00 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 27 07:47:00 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 27 07:47:00 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 27 07:47:00 localhost systemd[1]: Mounting FUSE Control File System...
Jan 27 07:47:00 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 27 07:47:00 localhost kernel: ACPI: bus type drm_connector registered
Jan 27 07:47:00 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 27 07:47:00 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 27 07:47:00 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 27 07:47:00 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 27 07:47:00 localhost systemd[1]: Starting Create System Users...
Jan 27 07:47:00 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 27 07:47:00 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 27 07:47:00 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 27 07:47:00 localhost systemd-journald[681]: Received client request to flush runtime journal.
Jan 27 07:47:00 localhost systemd[1]: Mounted FUSE Control File System.
Jan 27 07:47:00 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 27 07:47:00 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 27 07:47:00 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 27 07:47:00 localhost systemd[1]: Finished Create System Users.
Jan 27 07:47:00 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 27 07:47:00 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 27 07:47:00 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 27 07:47:01 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 27 07:47:01 localhost systemd[1]: Reached target Local File Systems.
Jan 27 07:47:01 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 27 07:47:01 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 27 07:47:01 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 27 07:47:01 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 27 07:47:01 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 27 07:47:01 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 27 07:47:01 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 27 07:47:01 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Jan 27 07:47:01 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 27 07:47:01 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 27 07:47:01 localhost systemd[1]: Starting Security Auditing Service...
Jan 27 07:47:01 localhost systemd[1]: Starting RPC Bind...
Jan 27 07:47:01 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 27 07:47:01 localhost auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 27 07:47:01 localhost auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 27 07:47:01 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 27 07:47:01 localhost systemd[1]: Started RPC Bind.
Jan 27 07:47:01 localhost augenrules[710]: /sbin/augenrules: No change
Jan 27 07:47:01 localhost augenrules[725]: No rules
Jan 27 07:47:01 localhost augenrules[725]: enabled 1
Jan 27 07:47:01 localhost augenrules[725]: failure 1
Jan 27 07:47:01 localhost augenrules[725]: pid 705
Jan 27 07:47:01 localhost augenrules[725]: rate_limit 0
Jan 27 07:47:01 localhost augenrules[725]: backlog_limit 8192
Jan 27 07:47:01 localhost augenrules[725]: lost 0
Jan 27 07:47:01 localhost augenrules[725]: backlog 4
Jan 27 07:47:01 localhost augenrules[725]: backlog_wait_time 60000
Jan 27 07:47:01 localhost augenrules[725]: backlog_wait_time_actual 0
Jan 27 07:47:01 localhost augenrules[725]: enabled 1
Jan 27 07:47:01 localhost augenrules[725]: failure 1
Jan 27 07:47:01 localhost augenrules[725]: pid 705
Jan 27 07:47:01 localhost augenrules[725]: rate_limit 0
Jan 27 07:47:01 localhost augenrules[725]: backlog_limit 8192
Jan 27 07:47:01 localhost augenrules[725]: lost 0
Jan 27 07:47:01 localhost augenrules[725]: backlog 4
Jan 27 07:47:01 localhost augenrules[725]: backlog_wait_time 60000
Jan 27 07:47:01 localhost augenrules[725]: backlog_wait_time_actual 0
Jan 27 07:47:01 localhost augenrules[725]: enabled 1
Jan 27 07:47:01 localhost augenrules[725]: failure 1
Jan 27 07:47:01 localhost augenrules[725]: pid 705
Jan 27 07:47:01 localhost augenrules[725]: rate_limit 0
Jan 27 07:47:01 localhost augenrules[725]: backlog_limit 8192
Jan 27 07:47:01 localhost augenrules[725]: lost 0
Jan 27 07:47:01 localhost augenrules[725]: backlog 3
Jan 27 07:47:01 localhost augenrules[725]: backlog_wait_time 60000
Jan 27 07:47:01 localhost augenrules[725]: backlog_wait_time_actual 0
Jan 27 07:47:01 localhost systemd[1]: Started Security Auditing Service.
Jan 27 07:47:01 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 27 07:47:01 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 27 07:47:01 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 27 07:47:01 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 27 07:47:01 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 27 07:47:01 localhost systemd[1]: Starting Update is Completed...
Jan 27 07:47:01 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Jan 27 07:47:01 localhost systemd[1]: Finished Update is Completed.
Jan 27 07:47:01 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 27 07:47:01 localhost systemd[1]: Reached target System Initialization.
Jan 27 07:47:01 localhost systemd[1]: Started dnf makecache --timer.
Jan 27 07:47:01 localhost systemd[1]: Started Daily rotation of log files.
Jan 27 07:47:01 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 27 07:47:01 localhost systemd[1]: Reached target Timer Units.
Jan 27 07:47:01 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 27 07:47:01 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 27 07:47:01 localhost systemd[1]: Reached target Socket Units.
Jan 27 07:47:01 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 27 07:47:01 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 07:47:01 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 27 07:47:01 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 07:47:01 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 07:47:01 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 07:47:01 localhost systemd-udevd[743]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 07:47:01 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 27 07:47:01 localhost systemd[1]: Reached target Basic System.
Jan 27 07:47:01 localhost dbus-broker-lau[762]: Ready
Jan 27 07:47:01 localhost systemd[1]: Starting NTP client/server...
Jan 27 07:47:01 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 27 07:47:01 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 27 07:47:01 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 27 07:47:01 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 27 07:47:01 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 27 07:47:01 localhost chronyd[786]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 27 07:47:01 localhost chronyd[786]: Loaded 0 symmetric keys
Jan 27 07:47:01 localhost chronyd[786]: Using right/UTC timezone to obtain leap second data
Jan 27 07:47:01 localhost chronyd[786]: Loaded seccomp filter (level 2)
Jan 27 07:47:01 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 27 07:47:01 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 27 07:47:01 localhost systemd[1]: Started irqbalance daemon.
Jan 27 07:47:01 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 27 07:47:01 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 07:47:01 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 07:47:01 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 07:47:01 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 27 07:47:01 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 27 07:47:01 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 27 07:47:01 localhost systemd[1]: Starting User Login Management...
Jan 27 07:47:01 localhost systemd[1]: Started NTP client/server.
Jan 27 07:47:01 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 27 07:47:01 localhost kernel: kvm_amd: TSC scaling supported
Jan 27 07:47:01 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 27 07:47:01 localhost kernel: kvm_amd: Nested Paging enabled
Jan 27 07:47:01 localhost kernel: kvm_amd: LBR virtualization supported
Jan 27 07:47:01 localhost systemd-logind[799]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 27 07:47:01 localhost systemd-logind[799]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 27 07:47:01 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 27 07:47:01 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 27 07:47:01 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 27 07:47:01 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 27 07:47:01 localhost systemd-logind[799]: New seat seat0.
Jan 27 07:47:01 localhost systemd[1]: Started User Login Management.
Jan 27 07:47:01 localhost kernel: Console: switching to colour dummy device 80x25
Jan 27 07:47:01 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 27 07:47:01 localhost kernel: [drm] features: -context_init
Jan 27 07:47:01 localhost kernel: [drm] number of scanouts: 1
Jan 27 07:47:01 localhost kernel: [drm] number of cap sets: 0
Jan 27 07:47:01 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 27 07:47:01 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 27 07:47:01 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 27 07:47:01 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 27 07:47:01 localhost iptables.init[791]: iptables: Applying firewall rules: [  OK  ]
Jan 27 07:47:01 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 27 07:47:02 localhost cloud-init[842]: Cloud-init v. 24.4-8.el9 running 'init-local' at Tue, 27 Jan 2026 07:47:02 +0000. Up 7.11 seconds.
Jan 27 07:47:02 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 27 07:47:02 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 27 07:47:02 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpfjrq8zlu.mount: Deactivated successfully.
Jan 27 07:47:02 localhost systemd[1]: Starting Hostname Service...
Jan 27 07:47:02 localhost systemd[1]: Started Hostname Service.
Jan 27 07:47:02 np0005597076.novalocal systemd-hostnamed[856]: Hostname set to <np0005597076.novalocal> (static)
Jan 27 07:47:02 np0005597076.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 27 07:47:02 np0005597076.novalocal systemd[1]: Reached target Preparation for Network.
Jan 27 07:47:02 np0005597076.novalocal systemd[1]: Starting Network Manager...
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0415] NetworkManager (version 1.54.3-2.el9) is starting... (boot:f8a94f5b-78c7-40b7-8763-152a695f2532)
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0422] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0583] manager[0x55ef1e638000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0637] hostname: hostname: using hostnamed
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0637] hostname: static hostname changed from (none) to "np0005597076.novalocal"
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0642] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0759] manager[0x55ef1e638000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0760] manager[0x55ef1e638000]: rfkill: WWAN hardware radio set enabled
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0844] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0844] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0845] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0845] manager: Networking is enabled by state file
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0847] settings: Loaded settings plugin: keyfile (internal)
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0881] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0908] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0924] dhcp: init: Using DHCP client 'internal'
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0927] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0938] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0952] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0962] device (lo): Activation: starting connection 'lo' (f9f7e4cf-a182-47b9-990d-3db4b4bd0790)
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0971] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0974] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: Started Network Manager.
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.0999] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1003] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1005] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1007] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1008] device (eth0): carrier: link connected
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1011] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: Reached target Network.
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1026] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1042] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1047] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1048] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1051] manager: NetworkManager state is now CONNECTING
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1054] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1063] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1067] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1151] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1154] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 07:47:03 np0005597076.novalocal NetworkManager[860]: <info>  [1769500023.1161] device (lo): Activation: successful, device activated.
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: Reached target NFS client services.
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: Reached target Remote File Systems.
Jan 27 07:47:03 np0005597076.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 07:47:04 np0005597076.novalocal NetworkManager[860]: <info>  [1769500024.3013] dhcp4 (eth0): state changed new lease, address=38.102.83.128
Jan 27 07:47:04 np0005597076.novalocal NetworkManager[860]: <info>  [1769500024.3033] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 07:47:04 np0005597076.novalocal NetworkManager[860]: <info>  [1769500024.3070] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 07:47:04 np0005597076.novalocal NetworkManager[860]: <info>  [1769500024.3095] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 07:47:04 np0005597076.novalocal NetworkManager[860]: <info>  [1769500024.3099] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 07:47:04 np0005597076.novalocal NetworkManager[860]: <info>  [1769500024.3106] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 07:47:04 np0005597076.novalocal NetworkManager[860]: <info>  [1769500024.3111] device (eth0): Activation: successful, device activated.
Jan 27 07:47:04 np0005597076.novalocal NetworkManager[860]: <info>  [1769500024.3118] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 07:47:04 np0005597076.novalocal NetworkManager[860]: <info>  [1769500024.3124] manager: startup complete
Jan 27 07:47:04 np0005597076.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 27 07:47:04 np0005597076.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: Cloud-init v. 24.4-8.el9 running 'init' at Tue, 27 Jan 2026 07:47:04 +0000. Up 9.29 seconds.
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: |  eth0  | True |        38.102.83.128         | 255.255.255.0 | global | fa:16:3e:bc:15:0e |
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:febc:150e/64 |       .       |  link  | fa:16:3e:bc:15:0e |
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 27 07:47:04 np0005597076.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 07:47:05 np0005597076.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Jan 27 07:47:05 np0005597076.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 27 07:47:05 np0005597076.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Jan 27 07:47:05 np0005597076.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Jan 27 07:47:05 np0005597076.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Jan 27 07:47:05 np0005597076.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: Generating public/private rsa key pair.
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: The key fingerprint is:
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: SHA256:deEN7xVd2PQYcHMbwa+PY22NohnvlXSycvhqx2d8E/8 root@np0005597076.novalocal
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: The key's randomart image is:
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: +---[RSA 3072]----+
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |            +.=BB|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |           . *.*B|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |          . o +.+|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |         . . . ..|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |        S     +..|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |             o.* |
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |          . o.=*+|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |           +o=B.@|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |          o++=.=E|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: Generating public/private ecdsa key pair.
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: The key fingerprint is:
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: SHA256:tkWpekSQyzMsDNpUIfTAsj4a5Hdld/CqDHrcLTTKEag root@np0005597076.novalocal
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: The key's randomart image is:
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: +---[ECDSA 256]---+
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: | o+.o...         |
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |. ++  .. . .     |
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: | * ooo .. =      |
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |o...o.*+ + o     |
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |+ .  .+oS +      |
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |.E . + * +       |
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |..o = O *        |
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |.  . = * .       |
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |    .   .        |
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: Generating public/private ed25519 key pair.
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: The key fingerprint is:
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: SHA256:rH5VVUWOnqx86EPvWDI/9Hg+hizntcTJGb7wQf8E1fo root@np0005597076.novalocal
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: The key's randomart image is:
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: +--[ED25519 256]--+
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |               o=|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |              .o.|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |             .. +|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |       .    .o + |
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |        S  .  *o |
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |       .  ...o*o=|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |      .  . .B++%E|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |     .  .  o.@O+B|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: |      ..    *++B+|
Jan 27 07:47:06 np0005597076.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Reached target Network is Online.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Starting System Logging Service...
Jan 27 07:47:06 np0005597076.novalocal sm-notify[1006]: Version 2.5.4 starting
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Starting Permit User Sessions...
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 27 07:47:06 np0005597076.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Jan 27 07:47:06 np0005597076.novalocal sshd[1008]: Server listening on :: port 22.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Finished Permit User Sessions.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Started Command Scheduler.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Started Getty on tty1.
Jan 27 07:47:06 np0005597076.novalocal crond[1011]: (CRON) STARTUP (1.5.7)
Jan 27 07:47:06 np0005597076.novalocal crond[1011]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 27 07:47:06 np0005597076.novalocal crond[1011]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 58% if used.)
Jan 27 07:47:06 np0005597076.novalocal crond[1011]: (CRON) INFO (running with inotify support)
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Reached target Login Prompts.
Jan 27 07:47:06 np0005597076.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Jan 27 07:47:06 np0005597076.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Started System Logging Service.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Reached target Multi-User System.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 27 07:47:06 np0005597076.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 07:47:06 np0005597076.novalocal kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Jan 27 07:47:06 np0005597076.novalocal kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 27 07:47:06 np0005597076.novalocal cloud-init[1095]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Tue, 27 Jan 2026 07:47:06 +0000. Up 11.21 seconds.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 27 07:47:06 np0005597076.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 27 07:47:06 np0005597076.novalocal sshd-session[1176]: Connection closed by 38.102.83.114 port 42426 [preauth]
Jan 27 07:47:06 np0005597076.novalocal sshd-session[1196]: Unable to negotiate with 38.102.83.114 port 42438: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 27 07:47:06 np0005597076.novalocal sshd-session[1205]: Connection reset by 38.102.83.114 port 42444 [preauth]
Jan 27 07:47:06 np0005597076.novalocal sshd-session[1220]: Unable to negotiate with 38.102.83.114 port 42458: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 27 07:47:06 np0005597076.novalocal sshd-session[1226]: Unable to negotiate with 38.102.83.114 port 42462: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 27 07:47:06 np0005597076.novalocal sshd-session[1241]: Connection reset by 38.102.83.114 port 42470 [preauth]
Jan 27 07:47:06 np0005597076.novalocal sshd-session[1253]: Unable to negotiate with 38.102.83.114 port 42486: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 27 07:47:06 np0005597076.novalocal sshd-session[1262]: Unable to negotiate with 38.102.83.114 port 42498: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 27 07:47:06 np0005597076.novalocal cloud-init[1276]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Tue, 27 Jan 2026 07:47:06 +0000. Up 11.60 seconds.
Jan 27 07:47:06 np0005597076.novalocal sshd-session[1234]: Connection closed by 38.102.83.114 port 42468 [preauth]
Jan 27 07:47:07 np0005597076.novalocal dracut[1288]: dracut-057-102.git20250818.el9
Jan 27 07:47:07 np0005597076.novalocal cloud-init[1305]: #############################################################
Jan 27 07:47:07 np0005597076.novalocal cloud-init[1306]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 27 07:47:07 np0005597076.novalocal cloud-init[1308]: 256 SHA256:tkWpekSQyzMsDNpUIfTAsj4a5Hdld/CqDHrcLTTKEag root@np0005597076.novalocal (ECDSA)
Jan 27 07:47:07 np0005597076.novalocal cloud-init[1310]: 256 SHA256:rH5VVUWOnqx86EPvWDI/9Hg+hizntcTJGb7wQf8E1fo root@np0005597076.novalocal (ED25519)
Jan 27 07:47:07 np0005597076.novalocal cloud-init[1312]: 3072 SHA256:deEN7xVd2PQYcHMbwa+PY22NohnvlXSycvhqx2d8E/8 root@np0005597076.novalocal (RSA)
Jan 27 07:47:07 np0005597076.novalocal cloud-init[1313]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 27 07:47:07 np0005597076.novalocal cloud-init[1314]: #############################################################
Jan 27 07:47:07 np0005597076.novalocal cloud-init[1276]: Cloud-init v. 24.4-8.el9 finished at Tue, 27 Jan 2026 07:47:07 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.84 seconds
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 27 07:47:07 np0005597076.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 27 07:47:07 np0005597076.novalocal systemd[1]: Reached target Cloud-init target.
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 27 07:47:07 np0005597076.novalocal dracut[1290]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: memstrack is not available
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: memstrack is not available
Jan 27 07:47:08 np0005597076.novalocal dracut[1290]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 27 07:47:09 np0005597076.novalocal dracut[1290]: *** Including module: systemd ***
Jan 27 07:47:09 np0005597076.novalocal chronyd[786]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Jan 27 07:47:09 np0005597076.novalocal chronyd[786]: System clock TAI offset set to 37 seconds
Jan 27 07:47:09 np0005597076.novalocal dracut[1290]: *** Including module: fips ***
Jan 27 07:47:09 np0005597076.novalocal dracut[1290]: *** Including module: systemd-initrd ***
Jan 27 07:47:09 np0005597076.novalocal dracut[1290]: *** Including module: i18n ***
Jan 27 07:47:09 np0005597076.novalocal dracut[1290]: *** Including module: drm ***
Jan 27 07:47:10 np0005597076.novalocal dracut[1290]: *** Including module: prefixdevname ***
Jan 27 07:47:10 np0005597076.novalocal dracut[1290]: *** Including module: kernel-modules ***
Jan 27 07:47:10 np0005597076.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 27 07:47:11 np0005597076.novalocal dracut[1290]: *** Including module: kernel-modules-extra ***
Jan 27 07:47:11 np0005597076.novalocal dracut[1290]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 27 07:47:11 np0005597076.novalocal dracut[1290]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 27 07:47:11 np0005597076.novalocal dracut[1290]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 27 07:47:11 np0005597076.novalocal dracut[1290]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 27 07:47:11 np0005597076.novalocal dracut[1290]: *** Including module: qemu ***
Jan 27 07:47:11 np0005597076.novalocal dracut[1290]: *** Including module: fstab-sys ***
Jan 27 07:47:11 np0005597076.novalocal dracut[1290]: *** Including module: rootfs-block ***
Jan 27 07:47:11 np0005597076.novalocal dracut[1290]: *** Including module: terminfo ***
Jan 27 07:47:11 np0005597076.novalocal dracut[1290]: *** Including module: udev-rules ***
Jan 27 07:47:11 np0005597076.novalocal dracut[1290]: Skipping udev rule: 91-permissions.rules
Jan 27 07:47:11 np0005597076.novalocal dracut[1290]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 27 07:47:12 np0005597076.novalocal dracut[1290]: *** Including module: virtiofs ***
Jan 27 07:47:12 np0005597076.novalocal dracut[1290]: *** Including module: dracut-systemd ***
Jan 27 07:47:12 np0005597076.novalocal dracut[1290]: *** Including module: usrmount ***
Jan 27 07:47:12 np0005597076.novalocal dracut[1290]: *** Including module: base ***
Jan 27 07:47:12 np0005597076.novalocal irqbalance[793]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 27 07:47:12 np0005597076.novalocal irqbalance[793]: IRQ 25 affinity is now unmanaged
Jan 27 07:47:12 np0005597076.novalocal irqbalance[793]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 27 07:47:12 np0005597076.novalocal irqbalance[793]: IRQ 31 affinity is now unmanaged
Jan 27 07:47:12 np0005597076.novalocal irqbalance[793]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 27 07:47:12 np0005597076.novalocal irqbalance[793]: IRQ 28 affinity is now unmanaged
Jan 27 07:47:12 np0005597076.novalocal irqbalance[793]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 27 07:47:12 np0005597076.novalocal irqbalance[793]: IRQ 32 affinity is now unmanaged
Jan 27 07:47:12 np0005597076.novalocal irqbalance[793]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 27 07:47:12 np0005597076.novalocal irqbalance[793]: IRQ 30 affinity is now unmanaged
Jan 27 07:47:12 np0005597076.novalocal irqbalance[793]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 27 07:47:12 np0005597076.novalocal irqbalance[793]: IRQ 29 affinity is now unmanaged
Jan 27 07:47:12 np0005597076.novalocal dracut[1290]: *** Including module: fs-lib ***
Jan 27 07:47:12 np0005597076.novalocal dracut[1290]: *** Including module: kdumpbase ***
Jan 27 07:47:12 np0005597076.novalocal dracut[1290]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 27 07:47:12 np0005597076.novalocal dracut[1290]:   microcode_ctl module: mangling fw_dir
Jan 27 07:47:12 np0005597076.novalocal dracut[1290]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 27 07:47:12 np0005597076.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: configuration "intel" is ignored
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]: *** Including module: openssl ***
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]: *** Including module: shutdown ***
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]: *** Including module: squash ***
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]: *** Including modules done ***
Jan 27 07:47:13 np0005597076.novalocal dracut[1290]: *** Installing kernel module dependencies ***
Jan 27 07:47:14 np0005597076.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 07:47:14 np0005597076.novalocal dracut[1290]: *** Installing kernel module dependencies done ***
Jan 27 07:47:14 np0005597076.novalocal dracut[1290]: *** Resolving executable dependencies ***
Jan 27 07:47:16 np0005597076.novalocal dracut[1290]: *** Resolving executable dependencies done ***
Jan 27 07:47:16 np0005597076.novalocal dracut[1290]: *** Generating early-microcode cpio image ***
Jan 27 07:47:16 np0005597076.novalocal dracut[1290]: *** Store current command line parameters ***
Jan 27 07:47:16 np0005597076.novalocal dracut[1290]: Stored kernel commandline:
Jan 27 07:47:16 np0005597076.novalocal dracut[1290]: No dracut internal kernel commandline stored in the initramfs
Jan 27 07:47:16 np0005597076.novalocal dracut[1290]: *** Install squash loader ***
Jan 27 07:47:17 np0005597076.novalocal dracut[1290]: *** Squashing the files inside the initramfs ***
Jan 27 07:47:18 np0005597076.novalocal dracut[1290]: *** Squashing the files inside the initramfs done ***
Jan 27 07:47:18 np0005597076.novalocal dracut[1290]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 27 07:47:18 np0005597076.novalocal dracut[1290]: *** Hardlinking files ***
Jan 27 07:47:18 np0005597076.novalocal dracut[1290]: Mode:           real
Jan 27 07:47:18 np0005597076.novalocal dracut[1290]: Files:          50
Jan 27 07:47:18 np0005597076.novalocal dracut[1290]: Linked:         0 files
Jan 27 07:47:18 np0005597076.novalocal dracut[1290]: Compared:       0 xattrs
Jan 27 07:47:18 np0005597076.novalocal dracut[1290]: Compared:       0 files
Jan 27 07:47:18 np0005597076.novalocal dracut[1290]: Saved:          0 B
Jan 27 07:47:18 np0005597076.novalocal dracut[1290]: Duration:       0.000444 seconds
Jan 27 07:47:18 np0005597076.novalocal dracut[1290]: *** Hardlinking files done ***
Jan 27 07:47:19 np0005597076.novalocal dracut[1290]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 27 07:47:19 np0005597076.novalocal kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Jan 27 07:47:19 np0005597076.novalocal kdumpctl[1016]: kdump: Starting kdump: [OK]
Jan 27 07:47:19 np0005597076.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 27 07:47:19 np0005597076.novalocal systemd[1]: Startup finished in 1.709s (kernel) + 2.934s (initrd) + 19.953s (userspace) = 24.597s.
Jan 27 07:47:29 np0005597076.novalocal sshd-session[4304]: Accepted publickey for zuul from 38.102.83.114 port 37290 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 27 07:47:29 np0005597076.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 27 07:47:29 np0005597076.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 27 07:47:29 np0005597076.novalocal systemd-logind[799]: New session 1 of user zuul.
Jan 27 07:47:29 np0005597076.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 27 07:47:29 np0005597076.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 27 07:47:29 np0005597076.novalocal systemd[4308]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Queued start job for default target Main User Target.
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Created slice User Application Slice.
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Reached target Paths.
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Reached target Timers.
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Starting D-Bus User Message Bus Socket...
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Starting Create User's Volatile Files and Directories...
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Finished Create User's Volatile Files and Directories.
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Listening on D-Bus User Message Bus Socket.
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Reached target Sockets.
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Reached target Basic System.
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Reached target Main User Target.
Jan 27 07:47:30 np0005597076.novalocal systemd[4308]: Startup finished in 162ms.
Jan 27 07:47:30 np0005597076.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 27 07:47:30 np0005597076.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 27 07:47:30 np0005597076.novalocal sshd-session[4304]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 07:47:30 np0005597076.novalocal python3[4390]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 07:47:33 np0005597076.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 07:47:33 np0005597076.novalocal python3[4420]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 07:47:42 np0005597076.novalocal python3[4478]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 07:47:44 np0005597076.novalocal python3[4518]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 27 07:47:46 np0005597076.novalocal python3[4544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLRiJZHOtFM5RDu7h1LIy9uM6G/N9ErVMr7YBR3ZGQ29Fz4Ec7m9++N502NOhjiB6h1jDZJ1SRCWTYd7hj8uvZh50zidSrrUKlkOc6drDSq45aflBQXQ5WFs2iSQzVt1a+PrOHJUCgZJUD/NI5+1ZnBP8xqCk1oODKfrB1YCRcS7TXTxlwefB1Mcm8n0Zo6DFYrl55fjViwlYjRAk6x1LJYExcPHf6gqv9oUTxOWWFJULvku94tht/U+Lh5Dp6eU1KQTxzgmZPbbrmAd0oth5losRJPJZY98WOrzWK0YuSybuMB5IOZIT67V0CD8ZOzzKOSl8OqDPmnu41fLceHb2XxRGrt0b6vyI+tMlLnYSzlKI0r4QTCqqKzFgTWkVoG4cSoUDJNdywxpFfwMDH2GlLG7fMrpgEa9kGSOO/DNtBMbCG3jofUrEF+6IYLjf6gmc0xKEVIcX8JxhTldAxR1moCpQqvrzDYeWKvKGxb+DNyGZCP+eGAU4TfWWtvlFQYrM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:47:47 np0005597076.novalocal python3[4568]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:47:47 np0005597076.novalocal python3[4667]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:47:48 np0005597076.novalocal python3[4738]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769500067.6202767-251-183277718361087/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=1e6029d2332149d58afc0086bb823111_id_rsa follow=False checksum=0c080731b865fb0e8c6c0d307c44d292a3bc10b7 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:47:49 np0005597076.novalocal python3[4861]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:47:49 np0005597076.novalocal python3[4932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769500068.6655087-306-202644649660746/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=1e6029d2332149d58afc0086bb823111_id_rsa.pub follow=False checksum=d0cf99f6e30d6cac2bb6ffc7514e7d7c448187da backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:47:50 np0005597076.novalocal python3[4980]: ansible-ping Invoked with data=pong
Jan 27 07:47:51 np0005597076.novalocal python3[5004]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 07:47:54 np0005597076.novalocal python3[5062]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 27 07:47:56 np0005597076.novalocal python3[5094]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:47:56 np0005597076.novalocal python3[5118]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:47:56 np0005597076.novalocal python3[5142]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:47:57 np0005597076.novalocal python3[5166]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:47:57 np0005597076.novalocal python3[5190]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:47:57 np0005597076.novalocal python3[5214]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:47:59 np0005597076.novalocal sudo[5238]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xatwoatzldcjfqcolzriwuyhfhseloxm ; /usr/bin/python3'
Jan 27 07:47:59 np0005597076.novalocal sudo[5238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:47:59 np0005597076.novalocal python3[5240]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:47:59 np0005597076.novalocal sudo[5238]: pam_unix(sudo:session): session closed for user root
Jan 27 07:48:00 np0005597076.novalocal sudo[5316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipkcflcfmjgvzakawmverrirdeunomdr ; /usr/bin/python3'
Jan 27 07:48:00 np0005597076.novalocal sudo[5316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:48:00 np0005597076.novalocal python3[5318]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:48:00 np0005597076.novalocal sudo[5316]: pam_unix(sudo:session): session closed for user root
Jan 27 07:48:00 np0005597076.novalocal sudo[5389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpegustppieslrjiibqlqmpdlcuezhwo ; /usr/bin/python3'
Jan 27 07:48:00 np0005597076.novalocal sudo[5389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:48:00 np0005597076.novalocal python3[5391]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769500079.6985168-31-239002950490640/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:48:00 np0005597076.novalocal sudo[5389]: pam_unix(sudo:session): session closed for user root
Jan 27 07:48:01 np0005597076.novalocal python3[5439]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:01 np0005597076.novalocal python3[5463]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:01 np0005597076.novalocal python3[5487]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:02 np0005597076.novalocal python3[5511]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:02 np0005597076.novalocal python3[5535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:02 np0005597076.novalocal python3[5559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:02 np0005597076.novalocal python3[5583]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:03 np0005597076.novalocal python3[5607]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:03 np0005597076.novalocal python3[5631]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:03 np0005597076.novalocal python3[5655]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:04 np0005597076.novalocal python3[5679]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:04 np0005597076.novalocal python3[5703]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:04 np0005597076.novalocal python3[5727]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:04 np0005597076.novalocal python3[5751]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:05 np0005597076.novalocal python3[5775]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:05 np0005597076.novalocal python3[5799]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:05 np0005597076.novalocal python3[5823]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:06 np0005597076.novalocal python3[5847]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:06 np0005597076.novalocal python3[5871]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:06 np0005597076.novalocal python3[5895]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:07 np0005597076.novalocal python3[5919]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:07 np0005597076.novalocal python3[5943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:07 np0005597076.novalocal python3[5967]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:07 np0005597076.novalocal python3[5991]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:08 np0005597076.novalocal python3[6015]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:08 np0005597076.novalocal python3[6039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:48:10 np0005597076.novalocal sudo[6063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufdgvippzquvkdykaaldeaoebwtbkzpt ; /usr/bin/python3'
Jan 27 07:48:10 np0005597076.novalocal sudo[6063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:48:11 np0005597076.novalocal python3[6065]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 27 07:48:11 np0005597076.novalocal systemd[1]: Starting Time & Date Service...
Jan 27 07:48:11 np0005597076.novalocal systemd[1]: Started Time & Date Service.
Jan 27 07:48:11 np0005597076.novalocal systemd-timedated[6067]: Changed time zone to 'UTC' (UTC).
Jan 27 07:48:11 np0005597076.novalocal sudo[6063]: pam_unix(sudo:session): session closed for user root
Jan 27 07:48:11 np0005597076.novalocal sudo[6094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-firdanzsrgmhgwecnmdbkyvrqjyrpctd ; /usr/bin/python3'
Jan 27 07:48:11 np0005597076.novalocal sudo[6094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:48:11 np0005597076.novalocal python3[6096]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:48:11 np0005597076.novalocal sudo[6094]: pam_unix(sudo:session): session closed for user root
Jan 27 07:48:12 np0005597076.novalocal python3[6172]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:48:12 np0005597076.novalocal python3[6243]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769500091.9248767-251-89286111167711/source _original_basename=tmpc1sxap07 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:48:13 np0005597076.novalocal python3[6343]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:48:13 np0005597076.novalocal python3[6414]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769500092.768845-301-227456354329837/source _original_basename=tmppm61skxz follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:48:14 np0005597076.novalocal sudo[6514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlthdwullzzrjwjccxfoqacliqrtdqft ; /usr/bin/python3'
Jan 27 07:48:14 np0005597076.novalocal sudo[6514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:48:14 np0005597076.novalocal python3[6516]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:48:14 np0005597076.novalocal sudo[6514]: pam_unix(sudo:session): session closed for user root
Jan 27 07:48:14 np0005597076.novalocal sudo[6587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jojdeuevsagdxzfaqxyrmomsiiykwrkf ; /usr/bin/python3'
Jan 27 07:48:14 np0005597076.novalocal sudo[6587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:48:14 np0005597076.novalocal python3[6589]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769500094.0977108-381-124369285936626/source _original_basename=tmp8xf1atg0 follow=False checksum=ef41ffc2d4a8b9f73488a75ae66bcded72c1a415 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:48:14 np0005597076.novalocal sudo[6587]: pam_unix(sudo:session): session closed for user root
Jan 27 07:48:15 np0005597076.novalocal python3[6637]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:48:15 np0005597076.novalocal python3[6663]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:48:16 np0005597076.novalocal sudo[6741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmxrnhmzxoyxlikabpxcrwwuruzhkoaa ; /usr/bin/python3'
Jan 27 07:48:16 np0005597076.novalocal sudo[6741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:48:16 np0005597076.novalocal python3[6743]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:48:16 np0005597076.novalocal sudo[6741]: pam_unix(sudo:session): session closed for user root
Jan 27 07:48:16 np0005597076.novalocal sudo[6814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ingqfweoillkoagnmylxhrqrddljnoys ; /usr/bin/python3'
Jan 27 07:48:16 np0005597076.novalocal sudo[6814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:48:16 np0005597076.novalocal python3[6816]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769500095.901137-451-37242908032045/source _original_basename=tmpvogqy17u follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:48:16 np0005597076.novalocal sudo[6814]: pam_unix(sudo:session): session closed for user root
Jan 27 07:48:17 np0005597076.novalocal sudo[6865]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lflfdawnwpjjfszjvhdzfmypyrjohaof ; /usr/bin/python3'
Jan 27 07:48:17 np0005597076.novalocal sudo[6865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:48:17 np0005597076.novalocal python3[6867]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-c639-f9a2-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:48:17 np0005597076.novalocal sudo[6865]: pam_unix(sudo:session): session closed for user root
Jan 27 07:48:18 np0005597076.novalocal python3[6895]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ef9-e89a-c639-f9a2-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 27 07:48:19 np0005597076.novalocal python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:48:22 np0005597076.novalocal irqbalance[793]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 27 07:48:22 np0005597076.novalocal irqbalance[793]: IRQ 27 affinity is now unmanaged
Jan 27 07:48:38 np0005597076.novalocal sudo[6947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvovmoibmbwmfoajphibhgrlzufmffpf ; /usr/bin/python3'
Jan 27 07:48:38 np0005597076.novalocal sudo[6947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:48:38 np0005597076.novalocal python3[6949]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:48:38 np0005597076.novalocal sudo[6947]: pam_unix(sudo:session): session closed for user root
Jan 27 07:48:41 np0005597076.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 07:49:22 np0005597076.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 27 07:49:22 np0005597076.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 27 07:49:22 np0005597076.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 27 07:49:22 np0005597076.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 27 07:49:22 np0005597076.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 27 07:49:22 np0005597076.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 27 07:49:22 np0005597076.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 27 07:49:22 np0005597076.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 27 07:49:22 np0005597076.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 27 07:49:22 np0005597076.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 27 07:49:22 np0005597076.novalocal NetworkManager[860]: <info>  [1769500162.6248] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 07:49:22 np0005597076.novalocal systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 07:49:22 np0005597076.novalocal NetworkManager[860]: <info>  [1769500162.6413] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 07:49:22 np0005597076.novalocal NetworkManager[860]: <info>  [1769500162.6438] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 27 07:49:22 np0005597076.novalocal NetworkManager[860]: <info>  [1769500162.6442] device (eth1): carrier: link connected
Jan 27 07:49:22 np0005597076.novalocal NetworkManager[860]: <info>  [1769500162.6444] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 27 07:49:22 np0005597076.novalocal NetworkManager[860]: <info>  [1769500162.6450] policy: auto-activating connection 'Wired connection 1' (5dcced6c-1ee6-334d-9b36-b61314403afd)
Jan 27 07:49:22 np0005597076.novalocal NetworkManager[860]: <info>  [1769500162.6453] device (eth1): Activation: starting connection 'Wired connection 1' (5dcced6c-1ee6-334d-9b36-b61314403afd)
Jan 27 07:49:22 np0005597076.novalocal NetworkManager[860]: <info>  [1769500162.6454] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 07:49:22 np0005597076.novalocal NetworkManager[860]: <info>  [1769500162.6458] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 07:49:22 np0005597076.novalocal NetworkManager[860]: <info>  [1769500162.6462] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 07:49:22 np0005597076.novalocal NetworkManager[860]: <info>  [1769500162.6468] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 07:49:23 np0005597076.novalocal python3[6979]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-6eb9-3b91-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:49:33 np0005597076.novalocal sudo[7057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swlogcfahzuoigukagfhxjdlzsnoppnc ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 07:49:33 np0005597076.novalocal sudo[7057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:49:33 np0005597076.novalocal python3[7059]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:49:33 np0005597076.novalocal sudo[7057]: pam_unix(sudo:session): session closed for user root
Jan 27 07:49:34 np0005597076.novalocal sudo[7130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbkjkpqgkinmsdcafgpzuhqvllsrfbvj ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 07:49:34 np0005597076.novalocal sudo[7130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:49:34 np0005597076.novalocal python3[7132]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769500173.5871696-104-238271177878738/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=ce99275b728c971b2b10f9a100636205ea0ed206 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:49:34 np0005597076.novalocal sudo[7130]: pam_unix(sudo:session): session closed for user root
Jan 27 07:49:34 np0005597076.novalocal sudo[7180]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bokisymamccbfekviyppgfhhrikskudn ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 07:49:34 np0005597076.novalocal sudo[7180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:49:34 np0005597076.novalocal python3[7182]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: Stopping Network Manager...
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[860]: <info>  [1769500175.0263] caught SIGTERM, shutting down normally.
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[860]: <info>  [1769500175.0273] dhcp4 (eth0): canceled DHCP transaction
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[860]: <info>  [1769500175.0273] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[860]: <info>  [1769500175.0273] dhcp4 (eth0): state changed no lease
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[860]: <info>  [1769500175.0276] manager: NetworkManager state is now CONNECTING
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[860]: <info>  [1769500175.0379] dhcp4 (eth1): canceled DHCP transaction
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[860]: <info>  [1769500175.0379] dhcp4 (eth1): state changed no lease
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[860]: <info>  [1769500175.0476] exiting (success)
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: Stopped Network Manager.
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: NetworkManager.service: Consumed 1.298s CPU time, 9.9M memory peak.
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: Starting Network Manager...
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.1248] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:f8a94f5b-78c7-40b7-8763-152a695f2532)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.1252] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.1317] manager[0x55b07114a000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: Starting Hostname Service...
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: Started Hostname Service.
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2051] hostname: hostname: using hostnamed
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2053] hostname: static hostname changed from (none) to "np0005597076.novalocal"
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2057] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2061] manager[0x55b07114a000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2062] manager[0x55b07114a000]: rfkill: WWAN hardware radio set enabled
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2085] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2085] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2086] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2086] manager: Networking is enabled by state file
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2089] settings: Loaded settings plugin: keyfile (internal)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2092] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2115] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2124] dhcp: init: Using DHCP client 'internal'
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2126] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2130] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2135] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2143] device (lo): Activation: starting connection 'lo' (f9f7e4cf-a182-47b9-990d-3db4b4bd0790)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2149] device (eth0): carrier: link connected
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2153] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2159] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2159] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2165] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2171] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2176] device (eth1): carrier: link connected
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2179] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2185] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (5dcced6c-1ee6-334d-9b36-b61314403afd) (indicated)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2185] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2190] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2197] device (eth1): Activation: starting connection 'Wired connection 1' (5dcced6c-1ee6-334d-9b36-b61314403afd)
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: Started Network Manager.
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2204] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2208] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2211] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2213] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2215] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2218] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2220] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2222] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2224] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2228] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2231] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2238] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2240] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2253] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2257] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2262] device (lo): Activation: successful, device activated.
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2279] dhcp4 (eth0): state changed new lease, address=38.102.83.128
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2283] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 07:49:35 np0005597076.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2400] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2412] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2413] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2415] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2417] device (eth0): Activation: successful, device activated.
Jan 27 07:49:35 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500175.2420] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 07:49:35 np0005597076.novalocal sudo[7180]: pam_unix(sudo:session): session closed for user root
Jan 27 07:49:35 np0005597076.novalocal python3[7266]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-6eb9-3b91-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:49:45 np0005597076.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 07:49:51 np0005597076.novalocal systemd[4308]: Starting Mark boot as successful...
Jan 27 07:49:51 np0005597076.novalocal systemd[4308]: Finished Mark boot as successful.
Jan 27 07:50:05 np0005597076.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3259] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 07:50:20 np0005597076.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 07:50:20 np0005597076.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3601] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3605] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3619] device (eth1): Activation: successful, device activated.
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3630] manager: startup complete
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3633] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <warn>  [1769500220.3643] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3653] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 27 07:50:20 np0005597076.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3769] dhcp4 (eth1): canceled DHCP transaction
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3770] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3770] dhcp4 (eth1): state changed no lease
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3790] policy: auto-activating connection 'ci-private-network' (106c46df-b45f-5088-8dfe-552add023723)
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3796] device (eth1): Activation: starting connection 'ci-private-network' (106c46df-b45f-5088-8dfe-552add023723)
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3797] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3802] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3812] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3824] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3880] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3883] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 07:50:20 np0005597076.novalocal NetworkManager[7198]: <info>  [1769500220.3894] device (eth1): Activation: successful, device activated.
Jan 27 07:50:30 np0005597076.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 07:50:35 np0005597076.novalocal sshd-session[4317]: Received disconnect from 38.102.83.114 port 37290:11: disconnected by user
Jan 27 07:50:35 np0005597076.novalocal sshd-session[4317]: Disconnected from user zuul 38.102.83.114 port 37290
Jan 27 07:50:35 np0005597076.novalocal sshd-session[4304]: pam_unix(sshd:session): session closed for user zuul
Jan 27 07:50:35 np0005597076.novalocal systemd-logind[799]: Session 1 logged out. Waiting for processes to exit.
Jan 27 07:51:42 np0005597076.novalocal sshd-session[7295]: Accepted publickey for zuul from 38.102.83.114 port 58996 ssh2: RSA SHA256:DNK1vimKiSKrooFcnqxgdgoquKxzk/KTmMzYIUmiqbw
Jan 27 07:51:42 np0005597076.novalocal systemd-logind[799]: New session 3 of user zuul.
Jan 27 07:51:42 np0005597076.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 27 07:51:42 np0005597076.novalocal sshd-session[7295]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 07:51:43 np0005597076.novalocal sudo[7374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usinwvhfvqytrxbztytlbaqdkepibupt ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 07:51:43 np0005597076.novalocal sudo[7374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:51:43 np0005597076.novalocal python3[7376]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:51:43 np0005597076.novalocal sudo[7374]: pam_unix(sudo:session): session closed for user root
Jan 27 07:51:43 np0005597076.novalocal sudo[7447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovfqoboqhugjrmfrerjajxfrrxleexex ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 07:51:43 np0005597076.novalocal sudo[7447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:51:43 np0005597076.novalocal python3[7449]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769500302.9036138-373-229490815312465/source _original_basename=tmpd3f0owc1 follow=False checksum=d4af2ab61d0b71c282d8abf6d3308f12900032f5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:51:43 np0005597076.novalocal sudo[7447]: pam_unix(sudo:session): session closed for user root
Jan 27 07:51:47 np0005597076.novalocal sshd-session[7298]: Connection closed by 38.102.83.114 port 58996
Jan 27 07:51:47 np0005597076.novalocal sshd-session[7295]: pam_unix(sshd:session): session closed for user zuul
Jan 27 07:51:47 np0005597076.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 27 07:51:47 np0005597076.novalocal systemd-logind[799]: Session 3 logged out. Waiting for processes to exit.
Jan 27 07:51:47 np0005597076.novalocal systemd-logind[799]: Removed session 3.
Jan 27 07:52:51 np0005597076.novalocal systemd[4308]: Created slice User Background Tasks Slice.
Jan 27 07:52:51 np0005597076.novalocal systemd[4308]: Starting Cleanup of User's Temporary Files and Directories...
Jan 27 07:52:51 np0005597076.novalocal systemd[4308]: Finished Cleanup of User's Temporary Files and Directories.
Jan 27 07:56:45 np0005597076.novalocal sshd-session[7480]: Accepted publickey for zuul from 38.102.83.114 port 46558 ssh2: RSA SHA256:DNK1vimKiSKrooFcnqxgdgoquKxzk/KTmMzYIUmiqbw
Jan 27 07:56:45 np0005597076.novalocal systemd-logind[799]: New session 4 of user zuul.
Jan 27 07:56:45 np0005597076.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 27 07:56:45 np0005597076.novalocal sshd-session[7480]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 07:56:45 np0005597076.novalocal sudo[7507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yigrbezrscwklwucbortodplsoxwmucw ; /usr/bin/python3'
Jan 27 07:56:45 np0005597076.novalocal sudo[7507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:45 np0005597076.novalocal python3[7509]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163ef9-e89a-2568-0839-000000000ca0-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:56:45 np0005597076.novalocal sudo[7507]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:45 np0005597076.novalocal sudo[7536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jncyaypxdfqoncjwteijxzyhhxpmcxlr ; /usr/bin/python3'
Jan 27 07:56:45 np0005597076.novalocal sudo[7536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:46 np0005597076.novalocal python3[7538]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:56:46 np0005597076.novalocal sudo[7536]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:46 np0005597076.novalocal sudo[7562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxtgaqpplxcsuckmwsrjrtfwqunrbyol ; /usr/bin/python3'
Jan 27 07:56:46 np0005597076.novalocal sudo[7562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:46 np0005597076.novalocal python3[7564]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:56:46 np0005597076.novalocal sudo[7562]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:46 np0005597076.novalocal sudo[7588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdsaluilyxxxkblwlnbaonsltwpfucde ; /usr/bin/python3'
Jan 27 07:56:46 np0005597076.novalocal sudo[7588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:46 np0005597076.novalocal python3[7590]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:56:46 np0005597076.novalocal sudo[7588]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:46 np0005597076.novalocal sudo[7614]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izsuckhikcmjzfvchspbviopgybejkmp ; /usr/bin/python3'
Jan 27 07:56:46 np0005597076.novalocal sudo[7614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:46 np0005597076.novalocal python3[7616]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:56:46 np0005597076.novalocal sudo[7614]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:47 np0005597076.novalocal sudo[7640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuiibqymejruyetmxzcufiteacmoevax ; /usr/bin/python3'
Jan 27 07:56:47 np0005597076.novalocal sudo[7640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:47 np0005597076.novalocal python3[7642]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:56:47 np0005597076.novalocal sudo[7640]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:47 np0005597076.novalocal sudo[7718]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsqdzsomzheoscjmhsjxkbharwfftnfv ; /usr/bin/python3'
Jan 27 07:56:47 np0005597076.novalocal sudo[7718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:47 np0005597076.novalocal python3[7720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:56:47 np0005597076.novalocal sudo[7718]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:48 np0005597076.novalocal sudo[7791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vandgyxylywizuvaceqkjxxmboukpwqx ; /usr/bin/python3'
Jan 27 07:56:48 np0005597076.novalocal sudo[7791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:48 np0005597076.novalocal python3[7793]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769500607.7192657-362-114266902982119/source _original_basename=tmpbeukiooj follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:56:48 np0005597076.novalocal sudo[7791]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:49 np0005597076.novalocal sudo[7841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubsxcmupchfknjgjxxnacpzyfuewstcf ; /usr/bin/python3'
Jan 27 07:56:49 np0005597076.novalocal sudo[7841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:49 np0005597076.novalocal python3[7843]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 07:56:49 np0005597076.novalocal systemd[1]: Reloading.
Jan 27 07:56:49 np0005597076.novalocal systemd-rc-local-generator[7862]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 07:56:49 np0005597076.novalocal sudo[7841]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:51 np0005597076.novalocal sudo[7897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdrwmxgdyapmutrpmpjpkzmokmlrknew ; /usr/bin/python3'
Jan 27 07:56:51 np0005597076.novalocal sudo[7897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:51 np0005597076.novalocal python3[7899]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 27 07:56:51 np0005597076.novalocal sudo[7897]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:51 np0005597076.novalocal sudo[7923]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbfsuzswyrbgtdidigmglwucxmnsdrom ; /usr/bin/python3'
Jan 27 07:56:51 np0005597076.novalocal sudo[7923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:51 np0005597076.novalocal python3[7925]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:56:51 np0005597076.novalocal sudo[7923]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:51 np0005597076.novalocal sudo[7951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beexgyufbrkdzeknpfnfqtzalabwxrza ; /usr/bin/python3'
Jan 27 07:56:51 np0005597076.novalocal sudo[7951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:51 np0005597076.novalocal python3[7953]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:56:51 np0005597076.novalocal sudo[7951]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:51 np0005597076.novalocal sudo[7979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tryuxtaafyvunqlsflzeqtzgktkocwas ; /usr/bin/python3'
Jan 27 07:56:51 np0005597076.novalocal sudo[7979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:52 np0005597076.novalocal python3[7981]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:56:52 np0005597076.novalocal sudo[7979]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:52 np0005597076.novalocal sudo[8007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gafjqzgrktrdofemahcbujgbcadhafha ; /usr/bin/python3'
Jan 27 07:56:52 np0005597076.novalocal sudo[8007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:52 np0005597076.novalocal python3[8009]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:56:52 np0005597076.novalocal sudo[8007]: pam_unix(sudo:session): session closed for user root
Jan 27 07:56:53 np0005597076.novalocal python3[8036]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ef9-e89a-2568-0839-000000000ca7-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:56:53 np0005597076.novalocal python3[8066]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 07:56:56 np0005597076.novalocal sshd-session[7483]: Connection closed by 38.102.83.114 port 46558
Jan 27 07:56:56 np0005597076.novalocal sshd-session[7480]: pam_unix(sshd:session): session closed for user zuul
Jan 27 07:56:56 np0005597076.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 27 07:56:56 np0005597076.novalocal systemd[1]: session-4.scope: Consumed 3.681s CPU time.
Jan 27 07:56:56 np0005597076.novalocal systemd-logind[799]: Session 4 logged out. Waiting for processes to exit.
Jan 27 07:56:56 np0005597076.novalocal systemd-logind[799]: Removed session 4.
Jan 27 07:56:58 np0005597076.novalocal sshd-session[8071]: Accepted publickey for zuul from 38.102.83.114 port 36646 ssh2: RSA SHA256:DNK1vimKiSKrooFcnqxgdgoquKxzk/KTmMzYIUmiqbw
Jan 27 07:56:58 np0005597076.novalocal systemd-logind[799]: New session 5 of user zuul.
Jan 27 07:56:58 np0005597076.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 27 07:56:58 np0005597076.novalocal sshd-session[8071]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 07:56:58 np0005597076.novalocal sudo[8098]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyxvldnuohmjsemehvopjkynhbnvrxsr ; /usr/bin/python3'
Jan 27 07:56:58 np0005597076.novalocal sudo[8098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:56:58 np0005597076.novalocal python3[8100]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 27 07:57:45 np0005597076.novalocal setsebool[8142]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 27 07:57:45 np0005597076.novalocal setsebool[8142]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 27 07:57:57 np0005597076.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 27 07:57:57 np0005597076.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 07:57:57 np0005597076.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 27 07:57:57 np0005597076.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 07:57:57 np0005597076.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 27 07:57:57 np0005597076.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 07:57:57 np0005597076.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 07:57:57 np0005597076.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 07:58:08 np0005597076.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 27 07:58:08 np0005597076.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 07:58:08 np0005597076.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 27 07:58:08 np0005597076.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 07:58:08 np0005597076.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 27 07:58:08 np0005597076.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 07:58:08 np0005597076.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 07:58:08 np0005597076.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 07:58:26 np0005597076.novalocal dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 27 07:58:26 np0005597076.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 07:58:26 np0005597076.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 27 07:58:26 np0005597076.novalocal systemd[1]: Reloading.
Jan 27 07:58:26 np0005597076.novalocal systemd-rc-local-generator[8912]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 07:58:26 np0005597076.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 07:58:27 np0005597076.novalocal sudo[8098]: pam_unix(sudo:session): session closed for user root
Jan 27 07:58:32 np0005597076.novalocal sshd-session[13814]: Connection closed by 179.124.39.90 port 50644
Jan 27 07:59:07 np0005597076.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 07:59:07 np0005597076.novalocal systemd[1]: Finished man-db-cache-update.service.
Jan 27 07:59:07 np0005597076.novalocal systemd[1]: man-db-cache-update.service: Consumed 47.247s CPU time.
Jan 27 07:59:07 np0005597076.novalocal systemd[1]: run-r22ef892ae119401e8eba84e819440273.service: Deactivated successfully.
Jan 27 07:59:08 np0005597076.novalocal python3[29498]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163ef9-e89a-5d2f-e16d-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:59:09 np0005597076.novalocal kernel: evm: overlay not supported
Jan 27 07:59:09 np0005597076.novalocal systemd[4308]: Starting D-Bus User Message Bus...
Jan 27 07:59:09 np0005597076.novalocal dbus-broker-launch[29553]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 27 07:59:09 np0005597076.novalocal dbus-broker-launch[29553]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 27 07:59:09 np0005597076.novalocal systemd[4308]: Started D-Bus User Message Bus.
Jan 27 07:59:09 np0005597076.novalocal dbus-broker-lau[29553]: Ready
Jan 27 07:59:09 np0005597076.novalocal systemd[4308]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 27 07:59:09 np0005597076.novalocal systemd[4308]: Created slice Slice /user.
Jan 27 07:59:09 np0005597076.novalocal systemd[4308]: podman-29536.scope: unit configures an IP firewall, but not running as root.
Jan 27 07:59:09 np0005597076.novalocal systemd[4308]: (This warning is only shown for the first unit using IP firewalling.)
Jan 27 07:59:09 np0005597076.novalocal systemd[4308]: Started podman-29536.scope.
Jan 27 07:59:09 np0005597076.novalocal systemd[4308]: Started podman-pause-e572eaeb.scope.
Jan 27 07:59:11 np0005597076.novalocal sudo[29582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqeqlxfblxigqhbaahaqkmjhygocwkjf ; /usr/bin/python3'
Jan 27 07:59:11 np0005597076.novalocal sudo[29582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:59:11 np0005597076.novalocal python3[29584]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]] location = "38.102.83.47:5001" insecure = true path=/etc/containers/registries.conf block=[[registry]] location = "38.102.83.47:5001" insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:59:11 np0005597076.novalocal python3[29584]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 27 07:59:11 np0005597076.novalocal sudo[29582]: pam_unix(sudo:session): session closed for user root
Jan 27 07:59:12 np0005597076.novalocal sshd-session[8074]: Connection closed by 38.102.83.114 port 36646
Jan 27 07:59:12 np0005597076.novalocal sshd-session[8071]: pam_unix(sshd:session): session closed for user zuul
Jan 27 07:59:12 np0005597076.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 27 07:59:12 np0005597076.novalocal systemd[1]: session-5.scope: Consumed 45.720s CPU time.
Jan 27 07:59:12 np0005597076.novalocal systemd-logind[799]: Session 5 logged out. Waiting for processes to exit.
Jan 27 07:59:12 np0005597076.novalocal systemd-logind[799]: Removed session 5.
Jan 27 07:59:41 np0005597076.novalocal sshd-session[29588]: Unable to negotiate with 38.102.83.162 port 44544: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 27 07:59:41 np0005597076.novalocal sshd-session[29587]: Connection closed by 38.102.83.162 port 44524 [preauth]
Jan 27 07:59:41 np0005597076.novalocal sshd-session[29589]: Unable to negotiate with 38.102.83.162 port 44546: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 27 07:59:41 np0005597076.novalocal sshd-session[29591]: Connection closed by 38.102.83.162 port 44532 [preauth]
Jan 27 07:59:41 np0005597076.novalocal sshd-session[29590]: Unable to negotiate with 38.102.83.162 port 44562: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 27 07:59:42 np0005597076.novalocal sshd-session[29585]: error: maximum authentication attempts exceeded for root from 122.186.162.90 port 50150 ssh2 [preauth]
Jan 27 07:59:42 np0005597076.novalocal sshd-session[29585]: Disconnecting authenticating user root 122.186.162.90 port 50150: Too many authentication failures [preauth]
Jan 27 07:59:46 np0005597076.novalocal sshd-session[29599]: Accepted publickey for zuul from 38.102.83.114 port 50194 ssh2: RSA SHA256:DNK1vimKiSKrooFcnqxgdgoquKxzk/KTmMzYIUmiqbw
Jan 27 07:59:46 np0005597076.novalocal systemd-logind[799]: New session 6 of user zuul.
Jan 27 07:59:46 np0005597076.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 27 07:59:46 np0005597076.novalocal sshd-session[29599]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 07:59:47 np0005597076.novalocal python3[29626]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO0zoYpue4rhiEwUY0BAS4uWW7aaOyxCIVIt95DPdff6IVkPvzCcY312sBvy2jVrUjFhxZOi5gtnokIYP3kNwFE= zuul@np0005597075.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:59:47 np0005597076.novalocal sudo[29650]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqncsonxgzhumgiywteovjfraousrekk ; /usr/bin/python3'
Jan 27 07:59:47 np0005597076.novalocal sudo[29650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:59:47 np0005597076.novalocal python3[29652]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO0zoYpue4rhiEwUY0BAS4uWW7aaOyxCIVIt95DPdff6IVkPvzCcY312sBvy2jVrUjFhxZOi5gtnokIYP3kNwFE= zuul@np0005597075.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:59:47 np0005597076.novalocal sudo[29650]: pam_unix(sudo:session): session closed for user root
Jan 27 07:59:48 np0005597076.novalocal sudo[29676]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mslwodamgoblavobjkimxduwuojucqze ; /usr/bin/python3'
Jan 27 07:59:48 np0005597076.novalocal sudo[29676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:59:48 np0005597076.novalocal python3[29678]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005597076.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 27 07:59:48 np0005597076.novalocal useradd[29680]: new group: name=cloud-admin, GID=1002
Jan 27 07:59:48 np0005597076.novalocal useradd[29680]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 27 07:59:48 np0005597076.novalocal sudo[29676]: pam_unix(sudo:session): session closed for user root
Jan 27 07:59:49 np0005597076.novalocal sudo[29710]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lphhyugxjatccvpwgxitpkblglsvyrbw ; /usr/bin/python3'
Jan 27 07:59:49 np0005597076.novalocal sudo[29710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:59:50 np0005597076.novalocal python3[29712]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO0zoYpue4rhiEwUY0BAS4uWW7aaOyxCIVIt95DPdff6IVkPvzCcY312sBvy2jVrUjFhxZOi5gtnokIYP3kNwFE= zuul@np0005597075.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:59:50 np0005597076.novalocal sudo[29710]: pam_unix(sudo:session): session closed for user root
Jan 27 07:59:50 np0005597076.novalocal sudo[29788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuicbgtchsrxavqidmsvuieaziofhgqm ; /usr/bin/python3'
Jan 27 07:59:50 np0005597076.novalocal sudo[29788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:59:50 np0005597076.novalocal python3[29790]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:59:50 np0005597076.novalocal sudo[29788]: pam_unix(sudo:session): session closed for user root
Jan 27 07:59:50 np0005597076.novalocal sudo[29861]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsvrdqpfalqpsqsdwyolonjhekkhcejq ; /usr/bin/python3'
Jan 27 07:59:50 np0005597076.novalocal sudo[29861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:59:51 np0005597076.novalocal python3[29863]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769500790.247845-167-198561772320646/source _original_basename=tmp_5m4xhym follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:59:51 np0005597076.novalocal sudo[29861]: pam_unix(sudo:session): session closed for user root
Jan 27 07:59:51 np0005597076.novalocal sshd-session[29597]: error: maximum authentication attempts exceeded for root from 122.186.162.90 port 50258 ssh2 [preauth]
Jan 27 07:59:51 np0005597076.novalocal sshd-session[29597]: Disconnecting authenticating user root 122.186.162.90 port 50258: Too many authentication failures [preauth]
Jan 27 07:59:51 np0005597076.novalocal sudo[29911]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eutpszmbkemzasrgjgrpevhwhgoqjyyu ; /usr/bin/python3'
Jan 27 07:59:51 np0005597076.novalocal sudo[29911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 07:59:51 np0005597076.novalocal python3[29913]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 27 07:59:51 np0005597076.novalocal systemd[1]: Starting Hostname Service...
Jan 27 07:59:52 np0005597076.novalocal systemd[1]: Started Hostname Service.
Jan 27 07:59:52 np0005597076.novalocal systemd-hostnamed[29917]: Changed pretty hostname to 'compute-0'
Jan 27 07:59:52 compute-0 systemd-hostnamed[29917]: Hostname set to <compute-0> (static)
Jan 27 07:59:52 compute-0 NetworkManager[7198]: <info>  [1769500792.0861] hostname: static hostname changed from "np0005597076.novalocal" to "compute-0"
Jan 27 07:59:52 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 07:59:52 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 07:59:52 compute-0 sudo[29911]: pam_unix(sudo:session): session closed for user root
Jan 27 07:59:52 compute-0 sshd-session[29602]: Connection closed by 38.102.83.114 port 50194
Jan 27 07:59:52 compute-0 sshd-session[29599]: pam_unix(sshd:session): session closed for user zuul
Jan 27 07:59:52 compute-0 systemd-logind[799]: Session 6 logged out. Waiting for processes to exit.
Jan 27 07:59:52 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 27 07:59:52 compute-0 systemd[1]: session-6.scope: Consumed 2.058s CPU time.
Jan 27 07:59:52 compute-0 systemd-logind[799]: Removed session 6.
Jan 27 07:59:59 compute-0 sshd-session[29931]: error: maximum authentication attempts exceeded for root from 122.186.162.90 port 50370 ssh2 [preauth]
Jan 27 07:59:59 compute-0 sshd-session[29931]: Disconnecting authenticating user root 122.186.162.90 port 50370: Too many authentication failures [preauth]
Jan 27 08:00:02 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 08:00:06 compute-0 sshd-session[29933]: Received disconnect from 122.186.162.90 port 50476:11: disconnected by user [preauth]
Jan 27 08:00:06 compute-0 sshd-session[29933]: Disconnected from authenticating user root 122.186.162.90 port 50476 [preauth]
Jan 27 08:00:14 compute-0 sshd-session[29936]: Invalid user admin from 122.186.162.90 port 50566
Jan 27 08:00:15 compute-0 sshd-session[29936]: error: maximum authentication attempts exceeded for invalid user admin from 122.186.162.90 port 50566 ssh2 [preauth]
Jan 27 08:00:15 compute-0 sshd-session[29936]: Disconnecting invalid user admin 122.186.162.90 port 50566: Too many authentication failures [preauth]
Jan 27 08:00:22 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 08:00:23 compute-0 sshd-session[29939]: Invalid user admin from 122.186.162.90 port 50694
Jan 27 08:00:24 compute-0 sshd-session[29939]: error: maximum authentication attempts exceeded for invalid user admin from 122.186.162.90 port 50694 ssh2 [preauth]
Jan 27 08:00:24 compute-0 sshd-session[29939]: Disconnecting invalid user admin 122.186.162.90 port 50694: Too many authentication failures [preauth]
Jan 27 08:00:32 compute-0 sshd-session[29944]: Invalid user admin from 122.186.162.90 port 50828
Jan 27 08:00:33 compute-0 sshd-session[29944]: Received disconnect from 122.186.162.90 port 50828:11: disconnected by user [preauth]
Jan 27 08:00:33 compute-0 sshd-session[29944]: Disconnected from invalid user admin 122.186.162.90 port 50828 [preauth]
Jan 27 08:00:40 compute-0 sshd-session[29946]: Invalid user oracle from 122.186.162.90 port 50932
Jan 27 08:00:42 compute-0 sshd-session[29946]: error: maximum authentication attempts exceeded for invalid user oracle from 122.186.162.90 port 50932 ssh2 [preauth]
Jan 27 08:00:42 compute-0 sshd-session[29946]: Disconnecting invalid user oracle 122.186.162.90 port 50932: Too many authentication failures [preauth]
Jan 27 08:00:49 compute-0 sshd-session[29948]: Invalid user oracle from 122.186.162.90 port 51062
Jan 27 08:00:50 compute-0 sshd-session[29948]: error: maximum authentication attempts exceeded for invalid user oracle from 122.186.162.90 port 51062 ssh2 [preauth]
Jan 27 08:00:50 compute-0 sshd-session[29948]: Disconnecting invalid user oracle 122.186.162.90 port 51062: Too many authentication failures [preauth]
Jan 27 08:00:58 compute-0 sshd-session[29950]: Invalid user oracle from 122.186.162.90 port 51174
Jan 27 08:00:59 compute-0 sshd-session[29950]: Received disconnect from 122.186.162.90 port 51174:11: disconnected by user [preauth]
Jan 27 08:00:59 compute-0 sshd-session[29950]: Disconnected from invalid user oracle 122.186.162.90 port 51174 [preauth]
Jan 27 08:01:01 compute-0 CROND[29955]: (root) CMD (run-parts /etc/cron.hourly)
Jan 27 08:01:01 compute-0 run-parts[29958]: (/etc/cron.hourly) starting 0anacron
Jan 27 08:01:01 compute-0 anacron[29966]: Anacron started on 2026-01-27
Jan 27 08:01:01 compute-0 anacron[29966]: Will run job `cron.daily' in 34 min.
Jan 27 08:01:01 compute-0 anacron[29966]: Will run job `cron.weekly' in 54 min.
Jan 27 08:01:01 compute-0 anacron[29966]: Will run job `cron.monthly' in 74 min.
Jan 27 08:01:01 compute-0 anacron[29966]: Jobs will be executed sequentially
Jan 27 08:01:01 compute-0 run-parts[29968]: (/etc/cron.hourly) finished 0anacron
Jan 27 08:01:01 compute-0 CROND[29954]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 27 08:01:06 compute-0 sshd-session[29952]: Invalid user usuario from 122.186.162.90 port 51318
Jan 27 08:01:07 compute-0 sshd-session[29952]: error: maximum authentication attempts exceeded for invalid user usuario from 122.186.162.90 port 51318 ssh2 [preauth]
Jan 27 08:01:07 compute-0 sshd-session[29952]: Disconnecting invalid user usuario 122.186.162.90 port 51318: Too many authentication failures [preauth]
Jan 27 08:01:14 compute-0 sshd-session[29969]: Invalid user usuario from 122.186.162.90 port 51432
Jan 27 08:01:15 compute-0 sshd-session[29969]: error: maximum authentication attempts exceeded for invalid user usuario from 122.186.162.90 port 51432 ssh2 [preauth]
Jan 27 08:01:15 compute-0 sshd-session[29969]: Disconnecting invalid user usuario 122.186.162.90 port 51432: Too many authentication failures [preauth]
Jan 27 08:01:22 compute-0 sshd-session[29971]: Invalid user usuario from 122.186.162.90 port 51544
Jan 27 08:01:23 compute-0 sshd-session[29971]: Received disconnect from 122.186.162.90 port 51544:11: disconnected by user [preauth]
Jan 27 08:01:23 compute-0 sshd-session[29971]: Disconnected from invalid user usuario 122.186.162.90 port 51544 [preauth]
Jan 27 08:01:30 compute-0 sshd-session[29973]: Invalid user test from 122.186.162.90 port 51648
Jan 27 08:01:32 compute-0 sshd-session[29973]: error: maximum authentication attempts exceeded for invalid user test from 122.186.162.90 port 51648 ssh2 [preauth]
Jan 27 08:01:32 compute-0 sshd-session[29973]: Disconnecting invalid user test 122.186.162.90 port 51648: Too many authentication failures [preauth]
Jan 27 08:01:40 compute-0 sshd-session[29975]: Invalid user test from 122.186.162.90 port 51792
Jan 27 08:01:41 compute-0 sshd-session[29975]: error: maximum authentication attempts exceeded for invalid user test from 122.186.162.90 port 51792 ssh2 [preauth]
Jan 27 08:01:41 compute-0 sshd-session[29975]: Disconnecting invalid user test 122.186.162.90 port 51792: Too many authentication failures [preauth]
Jan 27 08:01:49 compute-0 sshd-session[29977]: Invalid user test from 122.186.162.90 port 51906
Jan 27 08:01:50 compute-0 sshd-session[29977]: Received disconnect from 122.186.162.90 port 51906:11: disconnected by user [preauth]
Jan 27 08:01:50 compute-0 sshd-session[29977]: Disconnected from invalid user test 122.186.162.90 port 51906 [preauth]
Jan 27 08:01:59 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 27 08:01:59 compute-0 sshd-session[29979]: Invalid user user from 122.186.162.90 port 52022
Jan 27 08:01:59 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 27 08:01:59 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 27 08:01:59 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 27 08:02:00 compute-0 sshd-session[29979]: error: maximum authentication attempts exceeded for invalid user user from 122.186.162.90 port 52022 ssh2 [preauth]
Jan 27 08:02:00 compute-0 sshd-session[29979]: Disconnecting invalid user user 122.186.162.90 port 52022: Too many authentication failures [preauth]
Jan 27 08:02:09 compute-0 sshd-session[29985]: Invalid user user from 122.186.162.90 port 52198
Jan 27 08:02:11 compute-0 sshd-session[29985]: error: maximum authentication attempts exceeded for invalid user user from 122.186.162.90 port 52198 ssh2 [preauth]
Jan 27 08:02:11 compute-0 sshd-session[29985]: Disconnecting invalid user user 122.186.162.90 port 52198: Too many authentication failures [preauth]
Jan 27 08:02:20 compute-0 sshd-session[29988]: Invalid user user from 122.186.162.90 port 52342
Jan 27 08:02:21 compute-0 sshd-session[29988]: Received disconnect from 122.186.162.90 port 52342:11: disconnected by user [preauth]
Jan 27 08:02:21 compute-0 sshd-session[29988]: Disconnected from invalid user user 122.186.162.90 port 52342 [preauth]
Jan 27 08:02:30 compute-0 sshd-session[29991]: Invalid user ftpuser from 122.186.162.90 port 52484
Jan 27 08:02:32 compute-0 sshd-session[29991]: error: maximum authentication attempts exceeded for invalid user ftpuser from 122.186.162.90 port 52484 ssh2 [preauth]
Jan 27 08:02:32 compute-0 sshd-session[29991]: Disconnecting invalid user ftpuser 122.186.162.90 port 52484: Too many authentication failures [preauth]
Jan 27 08:02:40 compute-0 sshd-session[29993]: Invalid user ftpuser from 122.186.162.90 port 52638
Jan 27 08:02:42 compute-0 sshd-session[29993]: error: maximum authentication attempts exceeded for invalid user ftpuser from 122.186.162.90 port 52638 ssh2 [preauth]
Jan 27 08:02:42 compute-0 sshd-session[29993]: Disconnecting invalid user ftpuser 122.186.162.90 port 52638: Too many authentication failures [preauth]
Jan 27 08:02:51 compute-0 sshd-session[29995]: Invalid user ftpuser from 122.186.162.90 port 52784
Jan 27 08:02:52 compute-0 sshd-session[29995]: Received disconnect from 122.186.162.90 port 52784:11: disconnected by user [preauth]
Jan 27 08:02:52 compute-0 sshd-session[29995]: Disconnected from invalid user ftpuser 122.186.162.90 port 52784 [preauth]
Jan 27 08:03:01 compute-0 sshd-session[29997]: Invalid user test1 from 122.186.162.90 port 52928
Jan 27 08:03:02 compute-0 sshd-session[29997]: error: maximum authentication attempts exceeded for invalid user test1 from 122.186.162.90 port 52928 ssh2 [preauth]
Jan 27 08:03:02 compute-0 sshd-session[29997]: Disconnecting invalid user test1 122.186.162.90 port 52928: Too many authentication failures [preauth]
Jan 27 08:03:11 compute-0 sshd-session[29999]: Invalid user test1 from 122.186.162.90 port 53058
Jan 27 08:03:12 compute-0 sshd-session[29999]: error: maximum authentication attempts exceeded for invalid user test1 from 122.186.162.90 port 53058 ssh2 [preauth]
Jan 27 08:03:12 compute-0 sshd-session[29999]: Disconnecting invalid user test1 122.186.162.90 port 53058: Too many authentication failures [preauth]
Jan 27 08:03:22 compute-0 sshd-session[30001]: Invalid user test1 from 122.186.162.90 port 53200
Jan 27 08:03:23 compute-0 sshd-session[30001]: Received disconnect from 122.186.162.90 port 53200:11: disconnected by user [preauth]
Jan 27 08:03:23 compute-0 sshd-session[30001]: Disconnected from invalid user test1 122.186.162.90 port 53200 [preauth]
Jan 27 08:03:32 compute-0 sshd-session[30003]: Invalid user test2 from 122.186.162.90 port 53346
Jan 27 08:03:34 compute-0 sshd-session[30003]: error: maximum authentication attempts exceeded for invalid user test2 from 122.186.162.90 port 53346 ssh2 [preauth]
Jan 27 08:03:34 compute-0 sshd-session[30003]: Disconnecting invalid user test2 122.186.162.90 port 53346: Too many authentication failures [preauth]
Jan 27 08:03:43 compute-0 sshd-session[30005]: Invalid user test2 from 122.186.162.90 port 53484
Jan 27 08:03:44 compute-0 sshd-session[30005]: error: maximum authentication attempts exceeded for invalid user test2 from 122.186.162.90 port 53484 ssh2 [preauth]
Jan 27 08:03:44 compute-0 sshd-session[30005]: Disconnecting invalid user test2 122.186.162.90 port 53484: Too many authentication failures [preauth]
Jan 27 08:03:54 compute-0 sshd-session[30007]: Invalid user test2 from 122.186.162.90 port 53644
Jan 27 08:03:55 compute-0 sshd-session[30007]: Received disconnect from 122.186.162.90 port 53644:11: disconnected by user [preauth]
Jan 27 08:03:55 compute-0 sshd-session[30007]: Disconnected from invalid user test2 122.186.162.90 port 53644 [preauth]
Jan 27 08:04:04 compute-0 sshd-session[30009]: Invalid user ubuntu from 122.186.162.90 port 53788
Jan 27 08:04:05 compute-0 sshd-session[30009]: error: maximum authentication attempts exceeded for invalid user ubuntu from 122.186.162.90 port 53788 ssh2 [preauth]
Jan 27 08:04:05 compute-0 sshd-session[30009]: Disconnecting invalid user ubuntu 122.186.162.90 port 53788: Too many authentication failures [preauth]
Jan 27 08:04:13 compute-0 sshd-session[30011]: Invalid user ubuntu from 122.186.162.90 port 53902
Jan 27 08:04:15 compute-0 sshd-session[30011]: error: maximum authentication attempts exceeded for invalid user ubuntu from 122.186.162.90 port 53902 ssh2 [preauth]
Jan 27 08:04:15 compute-0 sshd-session[30011]: Disconnecting invalid user ubuntu 122.186.162.90 port 53902: Too many authentication failures [preauth]
Jan 27 08:04:23 compute-0 sshd-session[30013]: Invalid user ubuntu from 122.186.162.90 port 54012
Jan 27 08:04:24 compute-0 sshd-session[30013]: Received disconnect from 122.186.162.90 port 54012:11: disconnected by user [preauth]
Jan 27 08:04:24 compute-0 sshd-session[30013]: Disconnected from invalid user ubuntu 122.186.162.90 port 54012 [preauth]
Jan 27 08:04:32 compute-0 sshd-session[30015]: Invalid user pi from 122.186.162.90 port 54144
Jan 27 08:04:33 compute-0 sshd-session[30015]: Received disconnect from 122.186.162.90 port 54144:11: disconnected by user [preauth]
Jan 27 08:04:33 compute-0 sshd-session[30015]: Disconnected from invalid user pi 122.186.162.90 port 54144 [preauth]
Jan 27 08:04:37 compute-0 sshd-session[30019]: Accepted publickey for zuul from 38.102.83.162 port 51006 ssh2: RSA SHA256:DNK1vimKiSKrooFcnqxgdgoquKxzk/KTmMzYIUmiqbw
Jan 27 08:04:37 compute-0 systemd-logind[799]: New session 7 of user zuul.
Jan 27 08:04:37 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 27 08:04:37 compute-0 sshd-session[30019]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:04:37 compute-0 python3[30095]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:04:39 compute-0 sudo[30209]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmsmuzpeorjkhdijriseitulpfveslau ; /usr/bin/python3'
Jan 27 08:04:39 compute-0 sudo[30209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:39 compute-0 python3[30211]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:04:39 compute-0 sudo[30209]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:40 compute-0 sudo[30282]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-johhfsrcheouoveolafttxxjhabjecxm ; /usr/bin/python3'
Jan 27 08:04:40 compute-0 sudo[30282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:40 compute-0 python3[30284]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769501079.4724624-34004-201372960082524/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:04:40 compute-0 sudo[30282]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:40 compute-0 sudo[30308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpyttfnwybabwusxcamkwiyxsndrufht ; /usr/bin/python3'
Jan 27 08:04:40 compute-0 sudo[30308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:40 compute-0 python3[30310]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:04:40 compute-0 sudo[30308]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:40 compute-0 sudo[30381]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htmnqchktstkcryogcudppjpgcziekvu ; /usr/bin/python3'
Jan 27 08:04:40 compute-0 sudo[30381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:40 compute-0 python3[30383]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769501079.4724624-34004-201372960082524/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:04:40 compute-0 sudo[30381]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:40 compute-0 sudo[30407]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlgkjbzuddjrrgzvtgkydcztsaggtcvg ; /usr/bin/python3'
Jan 27 08:04:40 compute-0 sudo[30407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:41 compute-0 python3[30409]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:04:41 compute-0 sudo[30407]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:41 compute-0 sudo[30480]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qofawpuofhifyujsouawxvxklboeggvr ; /usr/bin/python3'
Jan 27 08:04:41 compute-0 sudo[30480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:41 compute-0 python3[30482]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769501079.4724624-34004-201372960082524/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:04:41 compute-0 sudo[30480]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:41 compute-0 sudo[30506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxelbryapptwymvlqnhlvrhawodjules ; /usr/bin/python3'
Jan 27 08:04:41 compute-0 sudo[30506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:41 compute-0 python3[30508]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:04:41 compute-0 sudo[30506]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:41 compute-0 sudo[30579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggkoavadwtllvyqboskmfagrqernrukv ; /usr/bin/python3'
Jan 27 08:04:41 compute-0 sudo[30579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:41 compute-0 python3[30581]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769501079.4724624-34004-201372960082524/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:04:41 compute-0 sudo[30579]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:41 compute-0 sudo[30605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkksajtttrzoubeyerijeogsnjhhaljm ; /usr/bin/python3'
Jan 27 08:04:41 compute-0 sudo[30605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:42 compute-0 python3[30607]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:04:42 compute-0 sudo[30605]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:42 compute-0 sudo[30678]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aafgkhzzmdgjeygsoapqwbavibdyxisb ; /usr/bin/python3'
Jan 27 08:04:42 compute-0 sudo[30678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:42 compute-0 python3[30680]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769501079.4724624-34004-201372960082524/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:04:42 compute-0 sudo[30678]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:42 compute-0 sudo[30704]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndinjeqsnyjilsobtqlnmfztvqchejzd ; /usr/bin/python3'
Jan 27 08:04:42 compute-0 sudo[30704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:42 compute-0 python3[30706]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:04:42 compute-0 sudo[30704]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:42 compute-0 sudo[30777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myexvtnqcnrzswiumdgovacuujnqendp ; /usr/bin/python3'
Jan 27 08:04:42 compute-0 sudo[30777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:42 compute-0 sshd-session[30017]: Invalid user baikal from 122.186.162.90 port 54260
Jan 27 08:04:43 compute-0 python3[30779]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769501079.4724624-34004-201372960082524/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:04:43 compute-0 sudo[30777]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:43 compute-0 sudo[30803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvpmswxnnvwvyypxsgionaetofshhqtm ; /usr/bin/python3'
Jan 27 08:04:43 compute-0 sudo[30803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:43 compute-0 sshd-session[30017]: Received disconnect from 122.186.162.90 port 54260:11: disconnected by user [preauth]
Jan 27 08:04:43 compute-0 sshd-session[30017]: Disconnected from invalid user baikal 122.186.162.90 port 54260 [preauth]
Jan 27 08:04:43 compute-0 python3[30805]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:04:43 compute-0 sudo[30803]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:43 compute-0 sudo[30876]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfbsitwsspjqawqueoxdxknsauwglawr ; /usr/bin/python3'
Jan 27 08:04:43 compute-0 sudo[30876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:04:43 compute-0 python3[30878]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769501079.4724624-34004-201372960082524/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:04:43 compute-0 sudo[30876]: pam_unix(sudo:session): session closed for user root
Jan 27 08:04:46 compute-0 sshd-session[30903]: Unable to negotiate with 192.168.122.11 port 38172: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 27 08:04:46 compute-0 sshd-session[30904]: Connection closed by 192.168.122.11 port 38150 [preauth]
Jan 27 08:04:46 compute-0 sshd-session[30907]: Connection closed by 192.168.122.11 port 38158 [preauth]
Jan 27 08:04:46 compute-0 sshd-session[30905]: Unable to negotiate with 192.168.122.11 port 38188: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 27 08:04:46 compute-0 sshd-session[30906]: Unable to negotiate with 192.168.122.11 port 38190: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 27 08:04:56 compute-0 python3[30936]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:08:52 compute-0 sshd-session[30941]: error: kex_exchange_identification: read: Connection reset by peer
Jan 27 08:08:52 compute-0 sshd-session[30941]: Connection reset by 176.120.22.52 port 22267
Jan 27 08:09:55 compute-0 sshd-session[30022]: Received disconnect from 38.102.83.162 port 51006:11: disconnected by user
Jan 27 08:09:55 compute-0 sshd-session[30022]: Disconnected from user zuul 38.102.83.162 port 51006
Jan 27 08:09:55 compute-0 sshd-session[30019]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:09:55 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 27 08:09:55 compute-0 systemd[1]: session-7.scope: Consumed 4.493s CPU time.
Jan 27 08:09:55 compute-0 systemd-logind[799]: Session 7 logged out. Waiting for processes to exit.
Jan 27 08:09:55 compute-0 systemd-logind[799]: Removed session 7.
Jan 27 08:17:58 compute-0 sshd-session[30946]: Accepted publickey for zuul from 192.168.122.30 port 42070 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:17:58 compute-0 systemd-logind[799]: New session 8 of user zuul.
Jan 27 08:17:58 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 27 08:17:58 compute-0 sshd-session[30946]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:17:59 compute-0 python3.9[31099]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:18:00 compute-0 sudo[31278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckzdjrbepmetcicipfftzqjfcmgtjnej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769501880.4271417-56-277338744427890/AnsiballZ_command.py'
Jan 27 08:18:00 compute-0 sudo[31278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:18:00 compute-0 python3.9[31280]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:18:08 compute-0 sudo[31278]: pam_unix(sudo:session): session closed for user root
Jan 27 08:18:12 compute-0 sshd-session[30949]: Connection closed by 192.168.122.30 port 42070
Jan 27 08:18:12 compute-0 sshd-session[30946]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:18:12 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 27 08:18:12 compute-0 systemd[1]: session-8.scope: Consumed 7.674s CPU time.
Jan 27 08:18:12 compute-0 systemd-logind[799]: Session 8 logged out. Waiting for processes to exit.
Jan 27 08:18:12 compute-0 systemd-logind[799]: Removed session 8.
Jan 27 08:18:28 compute-0 sshd-session[31338]: Accepted publickey for zuul from 192.168.122.30 port 59964 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:18:28 compute-0 systemd-logind[799]: New session 9 of user zuul.
Jan 27 08:18:28 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 27 08:18:28 compute-0 sshd-session[31338]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:18:28 compute-0 python3.9[31491]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 27 08:18:30 compute-0 python3.9[31665]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:18:30 compute-0 sudo[31815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bprgjbhybfrjsofhzqlmhzgaqeiprrks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769501910.5460534-93-161005520966259/AnsiballZ_command.py'
Jan 27 08:18:30 compute-0 sudo[31815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:18:31 compute-0 python3.9[31817]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:18:31 compute-0 sudo[31815]: pam_unix(sudo:session): session closed for user root
Jan 27 08:18:32 compute-0 sudo[31968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibvnzjsjmkhzycghqnwykqpzdkonslpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769501911.693821-129-229574570607935/AnsiballZ_stat.py'
Jan 27 08:18:32 compute-0 sudo[31968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:18:32 compute-0 python3.9[31970]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:18:32 compute-0 sudo[31968]: pam_unix(sudo:session): session closed for user root
Jan 27 08:18:33 compute-0 sudo[32120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwskgzcmbsnufsndacimyevvxrhzvoej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769501912.8513348-153-201358136973049/AnsiballZ_file.py'
Jan 27 08:18:33 compute-0 sudo[32120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:18:33 compute-0 python3.9[32122]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:18:33 compute-0 sudo[32120]: pam_unix(sudo:session): session closed for user root
Jan 27 08:18:34 compute-0 sudo[32272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amiktbozuierynlsefhgznymfyidglzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769501913.7327704-177-167359023086318/AnsiballZ_stat.py'
Jan 27 08:18:34 compute-0 sudo[32272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:18:34 compute-0 python3.9[32274]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:18:34 compute-0 sudo[32272]: pam_unix(sudo:session): session closed for user root
Jan 27 08:18:34 compute-0 sudo[32395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aperdhouvioieglkprbdolyaolgxvpxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769501913.7327704-177-167359023086318/AnsiballZ_copy.py'
Jan 27 08:18:34 compute-0 sudo[32395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:18:34 compute-0 python3.9[32397]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769501913.7327704-177-167359023086318/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:18:34 compute-0 sudo[32395]: pam_unix(sudo:session): session closed for user root
Jan 27 08:18:35 compute-0 sudo[32547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hncxswsqimjlrsgeifyltaecpvnhuief ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769501915.0965283-222-252670899290955/AnsiballZ_setup.py'
Jan 27 08:18:35 compute-0 sudo[32547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:18:35 compute-0 python3.9[32549]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:18:35 compute-0 sudo[32547]: pam_unix(sudo:session): session closed for user root
Jan 27 08:18:36 compute-0 sudo[32703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfxlshdmjzxealjwylbyxhldkankreeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769501916.1116705-246-70861139626284/AnsiballZ_file.py'
Jan 27 08:18:36 compute-0 sudo[32703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:18:36 compute-0 python3.9[32705]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:18:36 compute-0 sudo[32703]: pam_unix(sudo:session): session closed for user root
Jan 27 08:18:37 compute-0 sudo[32856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzyupanqhflnlcxfwrgitzcsnnqximoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769501917.0773387-273-120015030382598/AnsiballZ_file.py'
Jan 27 08:18:37 compute-0 sudo[32856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:18:37 compute-0 python3.9[32858]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:18:37 compute-0 sudo[32856]: pam_unix(sudo:session): session closed for user root
Jan 27 08:18:38 compute-0 python3.9[33008]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:18:42 compute-0 python3.9[33262]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:18:43 compute-0 python3.9[33412]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:18:44 compute-0 python3.9[33566]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:18:45 compute-0 sudo[33722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpgzkimfjuhjkajvmnmkqsniwhlntbqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769501925.700842-417-110381940301878/AnsiballZ_setup.py'
Jan 27 08:18:45 compute-0 sudo[33722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:18:46 compute-0 python3.9[33724]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:18:46 compute-0 sudo[33722]: pam_unix(sudo:session): session closed for user root
Jan 27 08:18:46 compute-0 sudo[33806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atucvjoytfxmptmtjbzjyciomdrwyjwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769501925.700842-417-110381940301878/AnsiballZ_dnf.py'
Jan 27 08:18:46 compute-0 sudo[33806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:18:47 compute-0 python3.9[33808]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:19:32 compute-0 systemd[1]: Reloading.
Jan 27 08:19:32 compute-0 systemd-rc-local-generator[34007]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:19:33 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 27 08:19:33 compute-0 systemd[1]: Reloading.
Jan 27 08:19:33 compute-0 systemd-rc-local-generator[34047]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:19:33 compute-0 systemd[1]: Starting dnf makecache...
Jan 27 08:19:33 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 27 08:19:33 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 27 08:19:33 compute-0 systemd[1]: Reloading.
Jan 27 08:19:33 compute-0 systemd-rc-local-generator[34090]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:19:33 compute-0 dnf[34058]: Failed determining last makecache time.
Jan 27 08:19:33 compute-0 dnf[34058]: delorean-openstack-barbican-42b4c41831408a8e323 159 kB/s | 3.0 kB     00:00
Jan 27 08:19:33 compute-0 dnf[34058]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 167 kB/s | 3.0 kB     00:00
Jan 27 08:19:33 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 27 08:19:33 compute-0 dnf[34058]: delorean-openstack-cinder-1c00d6490d88e436f26ef 178 kB/s | 3.0 kB     00:00
Jan 27 08:19:33 compute-0 dnf[34058]: delorean-python-stevedore-c4acc5639fd2329372142 170 kB/s | 3.0 kB     00:00
Jan 27 08:19:33 compute-0 dnf[34058]: delorean-python-cloudkitty-tests-tempest-2c80f8 170 kB/s | 3.0 kB     00:00
Jan 27 08:19:33 compute-0 dnf[34058]: delorean-os-refresh-config-9bfc52b5049be2d8de61 178 kB/s | 3.0 kB     00:00
Jan 27 08:19:33 compute-0 dnf[34058]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 164 kB/s | 3.0 kB     00:00
Jan 27 08:19:33 compute-0 dnf[34058]: delorean-python-designate-tests-tempest-347fdbc 171 kB/s | 3.0 kB     00:00
Jan 27 08:19:33 compute-0 dnf[34058]: delorean-openstack-glance-1fd12c29b339f30fe823e 167 kB/s | 3.0 kB     00:00
Jan 27 08:19:33 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Jan 27 08:19:33 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Jan 27 08:19:34 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Jan 27 08:19:34 compute-0 dnf[34058]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 166 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: delorean-openstack-manila-3c01b7181572c95dac462 179 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: delorean-python-whitebox-neutron-tests-tempest- 193 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: delorean-openstack-octavia-ba397f07a7331190208c 164 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: delorean-openstack-watcher-c014f81a8647287f6dcc 178 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: delorean-ansible-config_template-5ccaa22121a7ff 183 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 172 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: delorean-openstack-swift-dc98a8463506ac520c469a 180 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: delorean-python-tempestconf-8515371b7cceebd4282 173 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: delorean-openstack-heat-ui-013accbfd179753bc3f0 159 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: CentOS Stream 9 - BaseOS                         66 kB/s | 6.7 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: CentOS Stream 9 - AppStream                      61 kB/s | 6.8 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: CentOS Stream 9 - CRB                            59 kB/s | 6.6 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: CentOS Stream 9 - Extras packages                72 kB/s | 7.3 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: dlrn-antelope-testing                           176 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: dlrn-antelope-build-deps                        170 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: centos9-rabbitmq                                124 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: centos9-storage                                 119 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: centos9-opstools                                130 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: NFV SIG OpenvSwitch                             123 kB/s | 3.0 kB     00:00
Jan 27 08:19:34 compute-0 dnf[34058]: repo-setup-centos-appstream                     180 kB/s | 4.4 kB     00:00
Jan 27 08:19:35 compute-0 dnf[34058]: repo-setup-centos-baseos                        158 kB/s | 3.9 kB     00:00
Jan 27 08:19:35 compute-0 dnf[34058]: repo-setup-centos-highavailability              174 kB/s | 3.9 kB     00:00
Jan 27 08:19:35 compute-0 dnf[34058]: repo-setup-centos-powertools                    165 kB/s | 4.3 kB     00:00
Jan 27 08:19:35 compute-0 dnf[34058]: Extra Packages for Enterprise Linux 9 - x86_64  269 kB/s |  33 kB     00:00
Jan 27 08:19:35 compute-0 dnf[34058]: Metadata cache created.
Jan 27 08:19:36 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 27 08:19:36 compute-0 systemd[1]: Finished dnf makecache.
Jan 27 08:19:36 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.698s CPU time.
Jan 27 08:20:52 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Jan 27 08:20:52 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 08:20:52 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 08:20:52 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 08:20:52 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 08:20:52 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 08:20:52 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 08:20:52 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 08:20:53 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 27 08:20:53 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:20:53 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:20:53 compute-0 systemd[1]: Reloading.
Jan 27 08:20:53 compute-0 systemd-rc-local-generator[34460]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:20:53 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:20:53 compute-0 sudo[33806]: pam_unix(sudo:session): session closed for user root
Jan 27 08:20:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:20:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:20:54 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.055s CPU time.
Jan 27 08:20:54 compute-0 systemd[1]: run-r248b39b58b1546c0b1a245232f5fdf93.service: Deactivated successfully.
Jan 27 08:20:54 compute-0 sudo[35374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcskhmzccrmegzhzpjnbhstgxycboogj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502053.9980354-453-97003424724173/AnsiballZ_command.py'
Jan 27 08:20:54 compute-0 sudo[35374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:20:54 compute-0 python3.9[35376]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:20:55 compute-0 sudo[35374]: pam_unix(sudo:session): session closed for user root
Jan 27 08:20:56 compute-0 sudo[35655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cywnwwkkbviieqagwzhdegsrgaolmbjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502056.1798344-477-260674893221568/AnsiballZ_selinux.py'
Jan 27 08:20:56 compute-0 sudo[35655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:20:57 compute-0 python3.9[35657]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 27 08:20:57 compute-0 sudo[35655]: pam_unix(sudo:session): session closed for user root
Jan 27 08:20:57 compute-0 sudo[35807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kerazvmcofsbtzlnenlkbinkhrmxnqan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502057.5851994-510-143025582594059/AnsiballZ_command.py'
Jan 27 08:20:57 compute-0 sudo[35807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:20:58 compute-0 python3.9[35809]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 27 08:20:58 compute-0 sudo[35807]: pam_unix(sudo:session): session closed for user root
Jan 27 08:20:59 compute-0 sudo[35960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqjyczpvayydpvxntvmdhkzrnnvthjkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502059.2757812-534-18097854407873/AnsiballZ_file.py'
Jan 27 08:20:59 compute-0 sudo[35960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:00 compute-0 python3.9[35962]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:00 compute-0 sudo[35960]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:00 compute-0 sudo[36112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqufcmvyfxcatlrwdimkcmuzirntsqcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502060.4489202-558-230667650058891/AnsiballZ_mount.py'
Jan 27 08:21:00 compute-0 sudo[36112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:01 compute-0 python3.9[36114]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 27 08:21:01 compute-0 sudo[36112]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:02 compute-0 sudo[36264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nusidrriuecnxrkraggbjpcoxitdxjgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502062.0833638-642-36555705765048/AnsiballZ_file.py'
Jan 27 08:21:02 compute-0 sudo[36264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:02 compute-0 python3.9[36266]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:02 compute-0 sudo[36264]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:03 compute-0 sudo[36416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnqccyhdnmmxiviypmeshccskvjgeimx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502062.7847466-666-29958362222010/AnsiballZ_stat.py'
Jan 27 08:21:03 compute-0 sudo[36416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:03 compute-0 python3.9[36418]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:03 compute-0 sudo[36416]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:03 compute-0 sudo[36539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tevcyqcqsraqygganurjzysaszrigzax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502062.7847466-666-29958362222010/AnsiballZ_copy.py'
Jan 27 08:21:03 compute-0 sudo[36539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:03 compute-0 python3.9[36541]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502062.7847466-666-29958362222010/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7fb5f5782584574169f631b3aaaac1ffc15b0eb1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:03 compute-0 sudo[36539]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:08 compute-0 sudo[36692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpyjcogdyfifzwrvpfydenntyzziwrnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502068.5408287-738-205696916185431/AnsiballZ_stat.py'
Jan 27 08:21:08 compute-0 sudo[36692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:09 compute-0 python3.9[36694]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:21:09 compute-0 sudo[36692]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:09 compute-0 sudo[36844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdxanmmbfmrtzmlplrvbnpsizioeuyxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502069.3434281-762-14297196443842/AnsiballZ_command.py'
Jan 27 08:21:09 compute-0 sudo[36844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:09 compute-0 python3.9[36846]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:21:09 compute-0 sudo[36844]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:10 compute-0 sudo[36997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sioecaabgaqamaevnjzmkvggwhkwvgno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502070.0958405-786-80877766999107/AnsiballZ_file.py'
Jan 27 08:21:10 compute-0 sudo[36997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:10 compute-0 python3.9[36999]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:10 compute-0 sudo[36997]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:11 compute-0 sudo[37149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uychgneoqbkcsxarbedajmvmhykaijyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502071.2226462-819-24190353250577/AnsiballZ_getent.py'
Jan 27 08:21:11 compute-0 sudo[37149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:11 compute-0 python3.9[37151]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 27 08:21:11 compute-0 sudo[37149]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:11 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:21:12 compute-0 sudo[37303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgbednptrslbpgfgeizbgfegqjelcjai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502072.0668716-843-206998427832183/AnsiballZ_group.py'
Jan 27 08:21:12 compute-0 sudo[37303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:12 compute-0 python3.9[37305]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 08:21:12 compute-0 groupadd[37306]: group added to /etc/group: name=qemu, GID=107
Jan 27 08:21:12 compute-0 groupadd[37306]: group added to /etc/gshadow: name=qemu
Jan 27 08:21:12 compute-0 groupadd[37306]: new group: name=qemu, GID=107
Jan 27 08:21:12 compute-0 sudo[37303]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:13 compute-0 sudo[37461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbsoqeqwwrpbgnqvzykvzidqatdygyfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502073.391748-867-112874146433080/AnsiballZ_user.py'
Jan 27 08:21:13 compute-0 sudo[37461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:14 compute-0 python3.9[37463]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 08:21:14 compute-0 useradd[37465]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 08:21:14 compute-0 sudo[37461]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:14 compute-0 sudo[37621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttkifcrxfwslpauddbnzqpngsitfwqad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502074.646972-891-231234484134497/AnsiballZ_getent.py'
Jan 27 08:21:14 compute-0 sudo[37621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:15 compute-0 python3.9[37623]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 27 08:21:15 compute-0 sudo[37621]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:15 compute-0 sudo[37774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgtggybslvvlwnuttlguclahwkpigykt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502075.420516-915-54514750161123/AnsiballZ_group.py'
Jan 27 08:21:15 compute-0 sudo[37774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:15 compute-0 python3.9[37776]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 08:21:15 compute-0 groupadd[37777]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 27 08:21:15 compute-0 groupadd[37777]: group added to /etc/gshadow: name=hugetlbfs
Jan 27 08:21:15 compute-0 groupadd[37777]: new group: name=hugetlbfs, GID=42477
Jan 27 08:21:15 compute-0 sudo[37774]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:16 compute-0 sudo[37932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwcrreihbareelqycbyarcqerolglgyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502076.7256672-942-26122987169164/AnsiballZ_file.py'
Jan 27 08:21:16 compute-0 sudo[37932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:17 compute-0 python3.9[37934]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 27 08:21:17 compute-0 sudo[37932]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:18 compute-0 sudo[38084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvevkqdlorbdczmfljchcgkcagsblkjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502077.7759428-975-59993826456030/AnsiballZ_dnf.py'
Jan 27 08:21:18 compute-0 sudo[38084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:18 compute-0 python3.9[38086]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:21:20 compute-0 sudo[38084]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:20 compute-0 sudo[38237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fchfqnrogzftketllovdglghrntzgtma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502080.6716805-999-31412562929337/AnsiballZ_file.py'
Jan 27 08:21:20 compute-0 sudo[38237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:21 compute-0 python3.9[38239]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:21 compute-0 sudo[38237]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:21 compute-0 sudo[38389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eipdbdllfabzdgepyimoeuprnbaeldwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502081.5486984-1023-47170912344721/AnsiballZ_stat.py'
Jan 27 08:21:21 compute-0 sudo[38389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:21 compute-0 python3.9[38391]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:22 compute-0 sudo[38389]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:22 compute-0 sudo[38512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbeebmrhufazmzcykrxtasswieiwdspz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502081.5486984-1023-47170912344721/AnsiballZ_copy.py'
Jan 27 08:21:22 compute-0 sudo[38512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:22 compute-0 python3.9[38514]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769502081.5486984-1023-47170912344721/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:22 compute-0 sudo[38512]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:23 compute-0 sudo[38664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnqbqtmeaccztcbbceymoaebasntchwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502082.9698598-1068-38212486896747/AnsiballZ_systemd.py'
Jan 27 08:21:23 compute-0 sudo[38664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:23 compute-0 python3.9[38666]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:21:23 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 27 08:21:23 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 27 08:21:23 compute-0 kernel: Bridge firewalling registered
Jan 27 08:21:23 compute-0 systemd-modules-load[38670]: Inserted module 'br_netfilter'
Jan 27 08:21:23 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 27 08:21:23 compute-0 sudo[38664]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:24 compute-0 sudo[38824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnbqgixktkehwnajjvziafrtycgwvtgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502084.1601546-1092-30512917893790/AnsiballZ_stat.py'
Jan 27 08:21:24 compute-0 sudo[38824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:24 compute-0 python3.9[38826]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:24 compute-0 sudo[38824]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:24 compute-0 sudo[38947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soekcmpkwygndvnxumoogbuxpbfazdqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502084.1601546-1092-30512917893790/AnsiballZ_copy.py'
Jan 27 08:21:24 compute-0 sudo[38947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:25 compute-0 python3.9[38949]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769502084.1601546-1092-30512917893790/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:25 compute-0 sudo[38947]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:25 compute-0 sudo[39099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vanaptpxdhmlstufpderlbzbaqkmqisc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502085.7459404-1146-17832059087787/AnsiballZ_dnf.py'
Jan 27 08:21:25 compute-0 sudo[39099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:26 compute-0 python3.9[39101]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:21:29 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Jan 27 08:21:29 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Jan 27 08:21:29 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:21:29 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:21:29 compute-0 systemd[1]: Reloading.
Jan 27 08:21:30 compute-0 systemd-rc-local-generator[39164]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:21:30 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:21:30 compute-0 sudo[39099]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:32 compute-0 irqbalance[793]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 27 08:21:32 compute-0 irqbalance[793]: IRQ 26 affinity is now unmanaged
Jan 27 08:21:32 compute-0 python3.9[42192]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:21:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:21:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:21:33 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.286s CPU time.
Jan 27 08:21:33 compute-0 systemd[1]: run-rfd2ef76c8c2d42d8a68abe31f3ed5ba0.service: Deactivated successfully.
Jan 27 08:21:33 compute-0 python3.9[42966]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 27 08:21:34 compute-0 python3.9[43116]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:21:34 compute-0 sudo[43266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dytfwtnrfkywpmusubpyezznnmveyeua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502094.722107-1263-124804373831416/AnsiballZ_command.py'
Jan 27 08:21:34 compute-0 sudo[43266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:35 compute-0 python3.9[43268]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:21:35 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 27 08:21:35 compute-0 systemd[1]: Starting Authorization Manager...
Jan 27 08:21:35 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 27 08:21:35 compute-0 polkitd[43485]: Started polkitd version 0.117
Jan 27 08:21:35 compute-0 polkitd[43485]: Loading rules from directory /etc/polkit-1/rules.d
Jan 27 08:21:35 compute-0 polkitd[43485]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 27 08:21:35 compute-0 polkitd[43485]: Finished loading, compiling and executing 2 rules
Jan 27 08:21:35 compute-0 polkitd[43485]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 27 08:21:35 compute-0 systemd[1]: Started Authorization Manager.
Jan 27 08:21:35 compute-0 sudo[43266]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:36 compute-0 sudo[43653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnnegewycqkqqutbrxvphkefvbeziakf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502096.254929-1290-184097807684791/AnsiballZ_systemd.py'
Jan 27 08:21:36 compute-0 sudo[43653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:36 compute-0 python3.9[43655]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:21:36 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 27 08:21:36 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 27 08:21:36 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 27 08:21:36 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 27 08:21:37 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 27 08:21:37 compute-0 sudo[43653]: pam_unix(sudo:session): session closed for user root
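[annotation] The block above is a check-then-apply pattern: the play stats and slurps /etc/tuned/active_profile (where tuned-adm persists the current selection), switches the profile via tuned-adm, then enables and restarts the daemon. A condensed sketch of the three logged steps; the register name is an assumption:

    - name: Read the active tuned profile
      ansible.builtin.slurp:
        src: /etc/tuned/active_profile
      register: tuned_profile          # name assumed; not visible in the log

    - name: Apply the throughput-performance profile
      ansible.builtin.command: /usr/sbin/tuned-adm profile throughput-performance

    - name: Enable and restart tuned
      ansible.builtin.systemd:
        name: tuned
        state: restarted
        enabled: true

The polkitd start-up at 08:21:35 is a side effect: tuned-adm talks to the daemon over D-Bus, which pulls in the Authorization Manager for the first time.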
Jan 27 08:21:37 compute-0 python3.9[43817]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 27 08:21:41 compute-0 sudo[43967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shejhnofygmzjrnpqxugxzotyjzmxala ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502101.1090915-1461-267109750360714/AnsiballZ_systemd.py'
Jan 27 08:21:41 compute-0 sudo[43967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:41 compute-0 python3.9[43969]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:21:41 compute-0 systemd[1]: Reloading.
Jan 27 08:21:41 compute-0 systemd-rc-local-generator[43998]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:21:41 compute-0 sudo[43967]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:42 compute-0 sudo[44155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsxyhbbnryvfjjnnobhunbqdqcihpoeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502102.0341194-1461-106084026550260/AnsiballZ_systemd.py'
Jan 27 08:21:42 compute-0 sudo[44155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:42 compute-0 python3.9[44157]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:21:42 compute-0 systemd[1]: Reloading.
Jan 27 08:21:42 compute-0 systemd-rc-local-generator[44187]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:21:42 compute-0 sudo[44155]: pam_unix(sudo:session): session closed for user root
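[annotation] Kernel samepage merging is then switched off: ksm.service and ksmtuned.service are stopped and disabled in two separate module calls, which a loop expresses more compactly:

    - name: Stop and disable KSM services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - ksm.service
        - ksmtuned.service

Disabling KSM is the usual choice on compute nodes where memory deduplication between guests is unwanted for performance or isolation reasons.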
Jan 27 08:21:43 compute-0 sudo[44343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ledqcmbqymvlbjrrndqqilucjbxttgql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502103.4500577-1509-83497463763708/AnsiballZ_command.py'
Jan 27 08:21:43 compute-0 sudo[44343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:43 compute-0 python3.9[44345]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:21:43 compute-0 sudo[44343]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:44 compute-0 sudo[44496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzpxblasqdsqygjyzfooewonxpvpziab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502104.282057-1533-237624367708544/AnsiballZ_command.py'
Jan 27 08:21:44 compute-0 sudo[44496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:44 compute-0 python3.9[44498]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:21:44 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 27 08:21:44 compute-0 sudo[44496]: pam_unix(sudo:session): session closed for user root
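[annotation] Swap is formatted and activated with two raw commands; the kernel line confirms a 1 GiB file (1048572k) at the default priority of -2. As tasks, assuming /swap was pre-allocated earlier in the play:

    - name: Format the swap file        # assumes /swap already exists
      ansible.builtin.command: mkswap /swap

    - name: Activate swap
      ansible.builtin.command: swapon /swap

The logged calls set no creates=/removes= guards, so on a rerun swapon would fail against an already-active swap file; a guard or failed_when would be needed for idempotence.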
Jan 27 08:21:45 compute-0 sudo[44649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neeydurhtscymktxjclmudbhlqnxcysg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502105.113657-1557-237658641450157/AnsiballZ_command.py'
Jan 27 08:21:45 compute-0 sudo[44649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:45 compute-0 python3.9[44651]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:21:46 compute-0 sudo[44649]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:47 compute-0 sudo[44811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsgbcklpohebmpgrkqabdvqsnwpkoops ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502107.4127517-1581-14660367588806/AnsiballZ_command.py'
Jan 27 08:21:47 compute-0 sudo[44811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:47 compute-0 python3.9[44813]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:21:48 compute-0 sudo[44811]: pam_unix(sudo:session): session closed for user root
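[annotation] The command above intends to write 2 to /sys/kernel/mm/ksm/run, which tells the kernel to unmerge all KSM-shared pages and stop scanning. As logged, though, it runs through the command module with _uses_shell=False, and command performs no shell redirection, so the > and the path are handed to echo as literal arguments. A shell task does what was presumably intended:

    - name: Unmerge all KSM pages and stop scanning
      ansible.builtin.shell: echo 2 > /sys/kernel/mm/ksm/run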
Jan 27 08:21:49 compute-0 sudo[44964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofgykzhvsjndkruldmvlwzqhnxkkrpsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502109.0554473-1605-140204174899461/AnsiballZ_systemd.py'
Jan 27 08:21:49 compute-0 sudo[44964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:49 compute-0 python3.9[44966]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:21:49 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 27 08:21:49 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 27 08:21:49 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 27 08:21:49 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 27 08:21:49 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 27 08:21:49 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 27 08:21:49 compute-0 sudo[44964]: pam_unix(sudo:session): session closed for user root
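[annotation] Restarting systemd-sysctl.service makes systemd re-read every sysctl.d drop-in, which is what finally activates the 99-edpm.conf file copied at 08:21:25. The logged task reduces to:

    - name: Re-apply kernel parameters from /etc/sysctl.d
      ansible.builtin.systemd:
        name: systemd-sysctl.service
        state: restarted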
Jan 27 08:21:50 compute-0 sshd-session[31341]: Connection closed by 192.168.122.30 port 59964
Jan 27 08:21:50 compute-0 sshd-session[31338]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:21:50 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 27 08:21:50 compute-0 systemd[1]: session-9.scope: Consumed 2min 14.141s CPU time.
Jan 27 08:21:50 compute-0 systemd-logind[799]: Session 9 logged out. Waiting for processes to exit.
Jan 27 08:21:50 compute-0 systemd-logind[799]: Removed session 9.
Jan 27 08:21:55 compute-0 sshd-session[44996]: Accepted publickey for zuul from 192.168.122.30 port 48440 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:21:55 compute-0 systemd-logind[799]: New session 10 of user zuul.
Jan 27 08:21:55 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 27 08:21:55 compute-0 sshd-session[44996]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:21:56 compute-0 python3.9[45149]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:21:57 compute-0 sudo[45303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azcqbmdsvsdzwwblrthxridcxsiphcfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502117.1857698-68-261116474625772/AnsiballZ_getent.py'
Jan 27 08:21:57 compute-0 sudo[45303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:57 compute-0 python3.9[45305]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 27 08:21:57 compute-0 sudo[45303]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:58 compute-0 sudo[45456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elvgbhvsyvpbdfdbzbztakeuokhnarqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502118.0824573-92-119377946864688/AnsiballZ_group.py'
Jan 27 08:21:58 compute-0 sudo[45456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:58 compute-0 python3.9[45459]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 08:21:58 compute-0 groupadd[45460]: group added to /etc/group: name=openvswitch, GID=42476
Jan 27 08:21:58 compute-0 groupadd[45460]: group added to /etc/gshadow: name=openvswitch
Jan 27 08:21:58 compute-0 groupadd[45460]: new group: name=openvswitch, GID=42476
Jan 27 08:21:58 compute-0 sudo[45456]: pam_unix(sudo:session): session closed for user root
Jan 27 08:21:59 compute-0 sudo[45615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oczpdckbzchgkqvsglsifniyyqafyain ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502119.0368166-116-15304775449641/AnsiballZ_user.py'
Jan 27 08:21:59 compute-0 sudo[45615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:21:59 compute-0 python3.9[45617]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 08:21:59 compute-0 useradd[45619]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 08:21:59 compute-0 useradd[45619]: add 'openvswitch' to group 'hugetlbfs'
Jan 27 08:21:59 compute-0 useradd[45619]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 27 08:21:59 compute-0 sudo[45615]: pam_unix(sudo:session): session closed for user root
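[annotation] With the new SSH session, the play pins down the openvswitch account before the package install: getent checks for an existing passwd entry, then group and user are created with a fixed GID/UID of 42476 and membership in hugetlbfs. Fixing the IDs keeps file ownership stable across hosts and container images, and the hugetlbfs group presumably grants access to hugepage mounts for userspace datapaths. Condensed from the logged parameters:

    - name: Ensure the openvswitch group exists with a fixed GID
      ansible.builtin.group:
        name: openvswitch
        gid: 42476
        state: present

    - name: Ensure the openvswitch user exists
      ansible.builtin.user:
        name: openvswitch
        uid: 42476
        group: openvswitch
        groups: [hugetlbfs]
        append: false
        shell: /sbin/nologin
        comment: openvswitch user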
Jan 27 08:22:06 compute-0 sudo[45775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruyzioswdzkfqdamexzogkfinttadafw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502126.2701337-146-257505589179189/AnsiballZ_setup.py'
Jan 27 08:22:06 compute-0 sudo[45775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:06 compute-0 python3.9[45777]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:22:07 compute-0 sudo[45775]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:07 compute-0 sudo[45859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhbiwmkxfwsuauwvumrwbqgylntlwcvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502126.2701337-146-257505589179189/AnsiballZ_dnf.py'
Jan 27 08:22:07 compute-0 sudo[45859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:07 compute-0 python3.9[45861]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 08:22:09 compute-0 sudo[45859]: pam_unix(sudo:session): session closed for user root
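[annotation] openvswitch is installed in two phases: the call above only pre-fetches the packages (download_only=True), and the state=present pass that follows performs the actual transaction. Splitting fetch from install keeps the rpm transaction itself short, which likely matters here because installing openvswitch triggers an SELinux policy rebuild. Both phases as tasks:

    - name: Pre-fetch openvswitch
      ansible.builtin.dnf:
        name: openvswitch
        download_only: true

    - name: Install openvswitch
      ansible.builtin.dnf:
        name: openvswitch
        state: present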
Jan 27 08:22:10 compute-0 sudo[46023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odnispdqelkzedpascsunoayesyfptoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502130.345484-188-246095142075430/AnsiballZ_dnf.py'
Jan 27 08:22:10 compute-0 sudo[46023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:10 compute-0 python3.9[46025]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:22:21 compute-0 kernel: SELinux:  Converting 2736 SID table entries...
Jan 27 08:22:21 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 08:22:21 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 08:22:21 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 08:22:21 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 08:22:21 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 08:22:21 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 08:22:21 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 08:22:21 compute-0 groupadd[46048]: group added to /etc/group: name=unbound, GID=994
Jan 27 08:22:21 compute-0 groupadd[46048]: group added to /etc/gshadow: name=unbound
Jan 27 08:22:21 compute-0 groupadd[46048]: new group: name=unbound, GID=994
Jan 27 08:22:21 compute-0 useradd[46055]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 27 08:22:22 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 27 08:22:22 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 27 08:22:23 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:22:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:22:23 compute-0 systemd[1]: Reloading.
Jan 27 08:22:23 compute-0 systemd-rc-local-generator[46554]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:22:23 compute-0 systemd-sysv-generator[46557]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:22:23 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:22:23 compute-0 sudo[46023]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:24 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:22:24 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:22:24 compute-0 systemd[1]: run-r2c56fc0160dc494ca023e1982df543ac.service: Deactivated successfully.
Jan 27 08:22:25 compute-0 sudo[47121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tecjdopznuzijfiadlaihcyplhbuheyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502144.7790844-212-236686998747758/AnsiballZ_systemd.py'
Jan 27 08:22:25 compute-0 sudo[47121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:25 compute-0 python3.9[47123]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:22:25 compute-0 systemd[1]: Reloading.
Jan 27 08:22:25 compute-0 systemd-rc-local-generator[47153]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:22:25 compute-0 systemd-sysv-generator[47156]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:22:26 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 27 08:22:26 compute-0 chown[47165]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 27 08:22:26 compute-0 ovs-ctl[47170]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 27 08:22:26 compute-0 ovs-ctl[47170]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 27 08:22:26 compute-0 ovs-ctl[47170]: Starting ovsdb-server [  OK  ]
Jan 27 08:22:26 compute-0 ovs-vsctl[47219]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 27 08:22:26 compute-0 ovs-vsctl[47239]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"fd496359-7f94-4196-96c9-9e7fb7c843a0\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 27 08:22:26 compute-0 ovs-ctl[47170]: Configuring Open vSwitch system IDs [  OK  ]
Jan 27 08:22:26 compute-0 ovs-vsctl[47245]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 27 08:22:26 compute-0 ovs-ctl[47170]: Enabling remote OVSDB managers [  OK  ]
Jan 27 08:22:26 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 27 08:22:26 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 27 08:22:26 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 27 08:22:26 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 27 08:22:26 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 27 08:22:26 compute-0 ovs-ctl[47290]: Inserting openvswitch module [  OK  ]
Jan 27 08:22:26 compute-0 ovs-ctl[47259]: Starting ovs-vswitchd [  OK  ]
Jan 27 08:22:26 compute-0 ovs-vsctl[47307]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 27 08:22:26 compute-0 ovs-ctl[47259]: Enabling remote OVSDB managers [  OK  ]
Jan 27 08:22:26 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 27 08:22:26 compute-0 systemd[1]: Starting Open vSwitch...
Jan 27 08:22:26 compute-0 systemd[1]: Finished Open vSwitch.
Jan 27 08:22:26 compute-0 sudo[47121]: pam_unix(sudo:session): session closed for user root
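[annotation] First start of openvswitch.service, reconstructed from the ovs-ctl output above: the unit creates an empty /etc/openvswitch/conf.db, starts ovsdb-server, sets db-version, system-id, and hostname via ovs-vsctl, loads the openvswitch kernel module, and starts ovs-vswitchd. The chown failure on /run/openvswitch appears to be a benign first-start ordering artifact, since startup continues with [ OK ]. The logged task reduces to:

    - name: Enable and start Open vSwitch
      ansible.builtin.systemd:
        name: openvswitch.service
        state: started
        enabled: true
        masked: false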
Jan 27 08:22:27 compute-0 python3.9[47459]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:22:28 compute-0 sudo[47609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbywfebijdlyuevdfjdottfeaamyzsbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502148.1768034-266-38291296145812/AnsiballZ_sefcontext.py'
Jan 27 08:22:28 compute-0 sudo[47609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:28 compute-0 python3.9[47611]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 27 08:22:30 compute-0 kernel: SELinux:  Converting 2750 SID table entries...
Jan 27 08:22:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 08:22:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 08:22:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 08:22:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 08:22:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 08:22:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 08:22:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 08:22:30 compute-0 sudo[47609]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:31 compute-0 python3.9[47766]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:22:31 compute-0 sudo[47922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dygvrmxulovzgckouwbpqcpzhulpxmkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502151.7362847-320-150360365384312/AnsiballZ_dnf.py'
Jan 27 08:22:31 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 27 08:22:31 compute-0 sudo[47922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:32 compute-0 python3.9[47924]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:22:33 compute-0 sudo[47922]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:34 compute-0 sudo[48075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkjyqaemxqwfqnsvbfcualmxxbqntxvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502154.3961303-344-244208593861895/AnsiballZ_command.py'
Jan 27 08:22:34 compute-0 sudo[48075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:35 compute-0 python3.9[48077]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:22:35 compute-0 sudo[48075]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:36 compute-0 sudo[48362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlohvsqgzkzpekizezwyiujupgjnqqfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502156.0820198-368-56340178477003/AnsiballZ_file.py'
Jan 27 08:22:36 compute-0 sudo[48362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:36 compute-0 python3.9[48364]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 27 08:22:36 compute-0 sudo[48362]: pam_unix(sudo:session): session closed for user root
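[annotation] Lines 08:22:28 through 08:22:36 prepare /var/lib/edpm-config: sefcontext registers a container_file_t file-context rule for the tree (the SELinux SID-conversion kernel lines are the policy store being rebuilt by semanage), and the file task then creates the directory with matching ownership and label:

    - name: Label /var/lib/edpm-config for container access
      community.general.sefcontext:
        target: "/var/lib/edpm-config(/.*)?"
        setype: container_file_t
        selevel: s0
        state: present

    - name: Create the directory with the matching context
      ansible.builtin.file:
        path: /var/lib/edpm-config
        state: directory
        owner: zuul
        group: zuul
        mode: "0755"
        setype: container_file_t
        selevel: s0

container_file_t is the type containerized processes are allowed to read and write, presumably so this directory can be bind-mounted into the EDPM service containers.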
Jan 27 08:22:37 compute-0 python3.9[48514]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:22:37 compute-0 sudo[48666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaupdpdagghjkseorenrwklteqaffzeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502157.6765563-416-180909285166678/AnsiballZ_dnf.py'
Jan 27 08:22:37 compute-0 sudo[48666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:38 compute-0 python3.9[48668]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:22:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:22:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:22:39 compute-0 systemd[1]: Reloading.
Jan 27 08:22:39 compute-0 systemd-sysv-generator[48710]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:22:39 compute-0 systemd-rc-local-generator[48707]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:22:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:22:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:22:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:22:40 compute-0 systemd[1]: run-r40862cb5ee7e4648b41e419be60e50d7.service: Deactivated successfully.
Jan 27 08:22:40 compute-0 sudo[48666]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:41 compute-0 sudo[48982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xztoztnnxrdrmakrcknawzykgyxeuzyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502160.8916807-440-175791794770442/AnsiballZ_systemd.py'
Jan 27 08:22:41 compute-0 sudo[48982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:41 compute-0 python3.9[48984]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:22:41 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 27 08:22:41 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 27 08:22:41 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 27 08:22:41 compute-0 systemd[1]: Stopping Network Manager...
Jan 27 08:22:41 compute-0 NetworkManager[7198]: <info>  [1769502161.5470] caught SIGTERM, shutting down normally.
Jan 27 08:22:41 compute-0 NetworkManager[7198]: <info>  [1769502161.5485] dhcp4 (eth0): canceled DHCP transaction
Jan 27 08:22:41 compute-0 NetworkManager[7198]: <info>  [1769502161.5485] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 08:22:41 compute-0 NetworkManager[7198]: <info>  [1769502161.5485] dhcp4 (eth0): state changed no lease
Jan 27 08:22:41 compute-0 NetworkManager[7198]: <info>  [1769502161.5488] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 08:22:41 compute-0 NetworkManager[7198]: <info>  [1769502161.5558] exiting (success)
Jan 27 08:22:41 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 08:22:41 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 08:22:41 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 27 08:22:41 compute-0 systemd[1]: Stopped Network Manager.
Jan 27 08:22:41 compute-0 systemd[1]: NetworkManager.service: Consumed 15.912s CPU time, 4.1M memory peak, read 0B from disk, written 30.0K to disk.
Jan 27 08:22:41 compute-0 systemd[1]: Starting Network Manager...
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.6281] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:f8a94f5b-78c7-40b7-8763-152a695f2532)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.6283] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.6332] manager[0x5578a94e7000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 08:22:41 compute-0 systemd[1]: Starting Hostname Service...
Jan 27 08:22:41 compute-0 systemd[1]: Started Hostname Service.
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.6992] hostname: hostname: using hostnamed
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.6994] hostname: static hostname changed from (none) to "compute-0"
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7002] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7009] manager[0x5578a94e7000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7010] manager[0x5578a94e7000]: rfkill: WWAN hardware radio set enabled
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7046] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7062] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7063] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7064] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7066] manager: Networking is enabled by state file
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7070] settings: Loaded settings plugin: keyfile (internal)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7076] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7124] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7142] dhcp: init: Using DHCP client 'internal'
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7148] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7160] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7170] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7184] device (lo): Activation: starting connection 'lo' (f9f7e4cf-a182-47b9-990d-3db4b4bd0790)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7196] device (eth0): carrier: link connected
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7203] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7211] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7211] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7222] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7235] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7246] device (eth1): carrier: link connected
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7255] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7265] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (106c46df-b45f-5088-8dfe-552add023723) (indicated)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7266] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7282] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7293] device (eth1): Activation: starting connection 'ci-private-network' (106c46df-b45f-5088-8dfe-552add023723)
Jan 27 08:22:41 compute-0 systemd[1]: Started Network Manager.
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7302] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7314] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7318] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7322] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7326] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7332] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7338] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7343] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7350] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7366] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7372] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7390] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7405] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7414] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7417] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7426] device (lo): Activation: successful, device activated.
Jan 27 08:22:41 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7436] dhcp4 (eth0): state changed new lease, address=38.102.83.128
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7444] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7526] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7545] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7556] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7561] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7564] device (eth1): Activation: successful, device activated.
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7588] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7593] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7608] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7616] device (eth0): Activation: successful, device activated.
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7625] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 08:22:41 compute-0 sudo[48982]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:41 compute-0 NetworkManager[48994]: <info>  [1769502161.7631] manager: startup complete
Jan 27 08:22:41 compute-0 systemd[1]: Finished Network Manager Wait Online.
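[annotation] NetworkManager-ovs only takes effect after a daemon restart, which is what the systemd task at 08:22:41 performs; the restart log confirms the plugin load (NMOvsFactory) and shows both interfaces re-assumed in place (managed-type 'assume'), with eth0 promptly renewing its DHCP lease and the manager ending in CONNECTED_GLOBAL. The logged task:

    - name: Restart NetworkManager to load the ovs plugin
      ansible.builtin.systemd:
        name: NetworkManager
        state: restarted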
Jan 27 08:22:42 compute-0 sudo[49208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrotwtwudhrxvphjwggnijffguzanank ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502162.2275152-464-259611817483214/AnsiballZ_dnf.py'
Jan 27 08:22:42 compute-0 sudo[49208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:42 compute-0 python3.9[49210]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:22:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:22:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:22:47 compute-0 systemd[1]: Reloading.
Jan 27 08:22:47 compute-0 systemd-sysv-generator[49268]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:22:47 compute-0 systemd-rc-local-generator[49265]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:22:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:22:47 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:22:47 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:22:47 compute-0 systemd[1]: run-r3ab53e3de7824f3cb062a83d14eff5d0.service: Deactivated successfully.
Jan 27 08:22:48 compute-0 sudo[49208]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:51 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 08:22:54 compute-0 sudo[49669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwgetpifwqezehzyyrfykrguidvdvavz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502174.4343767-500-149221200413770/AnsiballZ_stat.py'
Jan 27 08:22:54 compute-0 sudo[49669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:54 compute-0 python3.9[49671]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:22:54 compute-0 sudo[49669]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:55 compute-0 sudo[49821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blrtlrixvcmsjkoljjkvidhayqaiijvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502175.1972609-527-77173877678912/AnsiballZ_ini_file.py'
Jan 27 08:22:55 compute-0 sudo[49821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:55 compute-0 python3.9[49823]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:55 compute-0 sudo[49821]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:56 compute-0 sudo[49975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcthphnigpvujcudbyxafomegpsdfemv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502176.170194-557-237890109614423/AnsiballZ_ini_file.py'
Jan 27 08:22:56 compute-0 sudo[49975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:56 compute-0 python3.9[49977]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:56 compute-0 sudo[49975]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:57 compute-0 sudo[50127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrymogazlkgzaubbkbkvpcscfalrtcla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502176.8569567-557-128610245680667/AnsiballZ_ini_file.py'
Jan 27 08:22:57 compute-0 sudo[50127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:57 compute-0 python3.9[50129]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:57 compute-0 sudo[50127]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:58 compute-0 sudo[50279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evdgpfyvvuajbrteotfvqccsmyfhzkmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502177.7287889-602-237762279976869/AnsiballZ_ini_file.py'
Jan 27 08:22:58 compute-0 sudo[50279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:58 compute-0 python3.9[50281]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:58 compute-0 sudo[50279]: pam_unix(sudo:session): session closed for user root
Jan 27 08:22:58 compute-0 sudo[50431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqjgalbcmgnittzjnywusbtlxcladgne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502178.3460622-602-121831205883682/AnsiballZ_ini_file.py'
Jan 27 08:22:58 compute-0 sudo[50431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:58 compute-0 python3.9[50433]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:58 compute-0 sudo[50431]: pam_unix(sudo:session): session closed for user root
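[annotation] The five ini_file calls above adjust NetworkManager's configuration for os-net-config: no-auto-default=* stops NM from generating ad-hoc "Wired connection" profiles for unconfigured NICs, and the dns=none / rc-manager=unmanaged overrides (typically written by cloud-init) are removed from both config files so NM manages DNS and resolv.conf again. Condensed:

    - name: Do not auto-create default profiles for any interface
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: no-auto-default
        value: "*"
        no_extra_spaces: true
        backup: true
        mode: "0644"

    - name: Drop the dns/rc-manager overrides
      community.general.ini_file:
        path: "{{ item.path }}"
        section: main
        option: "{{ item.option }}"
        state: absent
      loop:
        - { path: /etc/NetworkManager/NetworkManager.conf, option: dns }
        - { path: /etc/NetworkManager/conf.d/99-cloud-init.conf, option: dns }
        - { path: /etc/NetworkManager/NetworkManager.conf, option: rc-manager }
        - { path: /etc/NetworkManager/conf.d/99-cloud-init.conf, option: rc-manager }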
Jan 27 08:22:59 compute-0 sudo[50583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyojnbneyzqxasbcqcmybbqxkcxqzfbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502179.1996157-647-73675906111802/AnsiballZ_stat.py'
Jan 27 08:22:59 compute-0 sudo[50583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:22:59 compute-0 python3.9[50585]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:22:59 compute-0 sudo[50583]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:00 compute-0 sudo[50706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wicouifrdpuxxnaycrwooszzvlouagqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502179.1996157-647-73675906111802/AnsiballZ_copy.py'
Jan 27 08:23:00 compute-0 sudo[50706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:00 compute-0 python3.9[50708]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769502179.1996157-647-73675906111802/.source _original_basename=.mboz99vv follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:00 compute-0 sudo[50706]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:01 compute-0 sudo[50858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqzgvvmlgxpgtlksxualyajvcmwrtecs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502180.8006039-692-91376131026521/AnsiballZ_file.py'
Jan 27 08:23:01 compute-0 sudo[50858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:01 compute-0 python3.9[50860]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:01 compute-0 sudo[50858]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:02 compute-0 sudo[51010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbdakcbrszkixgcydaiapkaybgzcvtiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502181.589101-716-180157118907870/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 27 08:23:02 compute-0 sudo[51010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:02 compute-0 python3.9[51012]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 27 08:23:02 compute-0 sudo[51010]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:02 compute-0 sudo[51162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbkmkqwzkjkboclatewghmgvamxbzqnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502182.619368-743-86089833600241/AnsiballZ_file.py'
Jan 27 08:23:02 compute-0 sudo[51162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:03 compute-0 python3.9[51164]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:03 compute-0 sudo[51162]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:03 compute-0 sudo[51314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lciluyalenidzcvvlhuyolzqjlohqggn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502183.5869312-773-183356426154580/AnsiballZ_stat.py'
Jan 27 08:23:03 compute-0 sudo[51314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:04 compute-0 sudo[51314]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:04 compute-0 sudo[51437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqalkacrxxzwxuwdtebrxboqzmkdignd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502183.5869312-773-183356426154580/AnsiballZ_copy.py'
Jan 27 08:23:04 compute-0 sudo[51437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:04 compute-0 sudo[51437]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:05 compute-0 sudo[51589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpmtwateltlmsoaqheeoacttaeaooyid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502185.3080945-818-259950481762055/AnsiballZ_slurp.py'
Jan 27 08:23:05 compute-0 sudo[51589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:05 compute-0 python3.9[51591]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 27 08:23:05 compute-0 sudo[51589]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:07 compute-0 sudo[51764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcajkrzvgvzqgujjavctnxcxeyatltub ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502186.3242035-845-14092898510438/async_wrapper.py j707455061372 300 /home/zuul/.ansible/tmp/ansible-tmp-1769502186.3242035-845-14092898510438/AnsiballZ_edpm_os_net_config.py _'
Jan 27 08:23:07 compute-0 sudo[51764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:07 compute-0 ansible-async_wrapper.py[51766]: Invoked with j707455061372 300 /home/zuul/.ansible/tmp/ansible-tmp-1769502186.3242035-845-14092898510438/AnsiballZ_edpm_os_net_config.py _
Jan 27 08:23:07 compute-0 ansible-async_wrapper.py[51769]: Starting module and watcher
Jan 27 08:23:07 compute-0 ansible-async_wrapper.py[51769]: Start watching 51770 (300)
Jan 27 08:23:07 compute-0 ansible-async_wrapper.py[51770]: Start module (51770)
Jan 27 08:23:07 compute-0 ansible-async_wrapper.py[51766]: Return async_wrapper task started.
Jan 27 08:23:07 compute-0 sudo[51764]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:07 compute-0 python3.9[51771]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 27 08:23:08 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 27 08:23:08 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 27 08:23:08 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 27 08:23:08 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 27 08:23:08 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.1439] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.1456] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2116] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2117] audit: op="connection-add" uuid="603f7914-3857-44bc-9b0b-e2055e666da9" name="br-ex-br" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2130] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2131] audit: op="connection-add" uuid="20e5b85b-d1db-4f4d-907b-b0b11004f305" name="br-ex-port" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2141] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2142] audit: op="connection-add" uuid="a15af4d3-767e-4643-9545-6fbc7d741765" name="eth1-port" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2152] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2153] audit: op="connection-add" uuid="6f782c22-86d6-4b3d-83c2-8ff2480897b6" name="vlan20-port" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2162] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2163] audit: op="connection-add" uuid="42cc4849-ca4f-4ce9-8aad-e97bd2e47b02" name="vlan21-port" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2172] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2173] audit: op="connection-add" uuid="f4f3577a-2bd5-4a48-9f93-3e8acdfdf4d6" name="vlan22-port" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2182] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2183] audit: op="connection-add" uuid="799038b9-b8d2-4e56-90d6-48703fa780dd" name="vlan23-port" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2201] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2216] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2217] audit: op="connection-add" uuid="48e10846-7191-4a78-b3cf-917c9d11cc28" name="br-ex-if" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2252] audit: op="connection-update" uuid="106c46df-b45f-5088-8dfe-552add023723" name="ci-private-network" args="ipv6.dns,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ipv6.addresses,ipv6.routing-rules,ovs-interface.type,connection.controller,connection.slave-type,connection.timestamp,connection.port-type,connection.master,ipv4.dns,ipv4.method,ipv4.never-default,ipv4.routes,ipv4.addresses,ipv4.routing-rules,ovs-external-ids.data" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2268] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2269] audit: op="connection-add" uuid="8b6494a8-1941-4096-905e-0d87f196178f" name="vlan20-if" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2287] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2288] audit: op="connection-add" uuid="220b32af-1f35-4a22-a931-e9bfa21698e3" name="vlan21-if" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2303] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2305] audit: op="connection-add" uuid="c4fe877c-0652-4c99-bebf-cf00a3b84185" name="vlan22-if" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2320] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2321] audit: op="connection-add" uuid="55e01a92-c2a5-4000-a645-7b3cf2ce1e69" name="vlan23-if" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2332] audit: op="connection-delete" uuid="5dcced6c-1ee6-334d-9b36-b61314403afd" name="Wired connection 1" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2345] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <warn>  [1769502189.2348] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2354] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2358] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (603f7914-3857-44bc-9b0b-e2055e666da9)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2359] audit: op="connection-activate" uuid="603f7914-3857-44bc-9b0b-e2055e666da9" name="br-ex-br" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2360] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <warn>  [1769502189.2361] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2366] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2370] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (20e5b85b-d1db-4f4d-907b-b0b11004f305)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2372] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <warn>  [1769502189.2373] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2376] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2380] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a15af4d3-767e-4643-9545-6fbc7d741765)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2382] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <warn>  [1769502189.2383] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2388] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2392] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (6f782c22-86d6-4b3d-83c2-8ff2480897b6)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2393] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <warn>  [1769502189.2394] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2399] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2403] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (42cc4849-ca4f-4ce9-8aad-e97bd2e47b02)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2404] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <warn>  [1769502189.2405] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2410] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2414] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (f4f3577a-2bd5-4a48-9f93-3e8acdfdf4d6)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2415] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <warn>  [1769502189.2416] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2421] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2425] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (799038b9-b8d2-4e56-90d6-48703fa780dd)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2425] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2428] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2430] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2435] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <warn>  [1769502189.2436] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2439] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2443] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (48e10846-7191-4a78-b3cf-917c9d11cc28)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2443] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2446] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2448] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2449] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2450] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2459] device (eth1): disconnecting for new activation request.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2460] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2463] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2465] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2466] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2469] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <warn>  [1769502189.2470] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2472] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2476] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (8b6494a8-1941-4096-905e-0d87f196178f)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2477] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2479] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2481] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2482] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2485] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <warn>  [1769502189.2486] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2489] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2493] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (220b32af-1f35-4a22-a931-e9bfa21698e3)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2493] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2496] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2498] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2499] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2502] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <warn>  [1769502189.2503] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2506] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2510] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (c4fe877c-0652-4c99-bebf-cf00a3b84185)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2511] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2513] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2515] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2516] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2519] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <warn>  [1769502189.2520] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2523] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2527] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (55e01a92-c2a5-4000-a645-7b3cf2ce1e69)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2528] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2531] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2532] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2534] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2535] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2546] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2547] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2551] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2552] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2558] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2561] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2565] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2568] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2570] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2574] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2578] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2581] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2583] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2587] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2591] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2594] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2596] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2600] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2613] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2619] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2621] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 systemd-udevd[51777]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:23:09 compute-0 kernel: Timeout policy base is empty
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2627] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2632] dhcp4 (eth0): canceled DHCP transaction
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2633] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2633] dhcp4 (eth0): state changed no lease
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2634] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2645] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2649] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51772 uid=0 result="fail" reason="Device is not activated"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2700] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2708] dhcp4 (eth0): state changed new lease, address=38.102.83.128
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2715] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 27 08:23:09 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2768] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2780] device (eth1): disconnecting for new activation request.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2781] audit: op="connection-activate" uuid="106c46df-b45f-5088-8dfe-552add023723" name="ci-private-network" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2782] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2889] device (eth1): Activation: starting connection 'ci-private-network' (106c46df-b45f-5088-8dfe-552add023723)
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2894] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2897] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2925] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2930] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2937] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2942] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2948] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2949] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2951] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2952] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2953] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2955] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51772 uid=0 result="success"
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2956] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2960] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2967] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2971] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2975] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2979] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2985] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2989] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2993] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.2997] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3001] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3004] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3007] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3013] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3017] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3021] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3059] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3061] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3070] device (eth1): Activation: successful, device activated.
Jan 27 08:23:09 compute-0 kernel: br-ex: entered promiscuous mode
Jan 27 08:23:09 compute-0 kernel: vlan22: entered promiscuous mode
Jan 27 08:23:09 compute-0 systemd-udevd[51778]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:23:09 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3338] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3349] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3365] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3367] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3370] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3426] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3436] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3453] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3454] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3458] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 08:23:09 compute-0 kernel: vlan20: entered promiscuous mode
Jan 27 08:23:09 compute-0 kernel: vlan21: entered promiscuous mode
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3572] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 27 08:23:09 compute-0 kernel: vlan23: entered promiscuous mode
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3592] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3623] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3625] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3631] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3645] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3655] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3687] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3688] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3692] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3746] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3759] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3775] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3776] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:23:09 compute-0 NetworkManager[48994]: <info>  [1769502189.3780] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 08:23:10 compute-0 NetworkManager[48994]: <info>  [1769502190.4784] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51772 uid=0 result="success"
Jan 27 08:23:10 compute-0 NetworkManager[48994]: <info>  [1769502190.6533] checkpoint[0x5578a94bc950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 27 08:23:10 compute-0 NetworkManager[48994]: <info>  [1769502190.6536] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51772 uid=0 result="success"
Jan 27 08:23:10 compute-0 sudo[52128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdrhhmkvgpsjxvjtfljqygwaheumcusw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502190.3688517-845-88506843393707/AnsiballZ_async_status.py'
Jan 27 08:23:10 compute-0 sudo[52128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:10 compute-0 python3.9[52130]: ansible-ansible.legacy.async_status Invoked with jid=j707455061372.51766 mode=status _async_dir=/root/.ansible_async
Jan 27 08:23:10 compute-0 sudo[52128]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:10 compute-0 NetworkManager[48994]: <info>  [1769502190.9741] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51772 uid=0 result="success"
Jan 27 08:23:10 compute-0 NetworkManager[48994]: <info>  [1769502190.9752] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51772 uid=0 result="success"
Jan 27 08:23:11 compute-0 NetworkManager[48994]: <info>  [1769502191.1959] audit: op="networking-control" arg="global-dns-configuration" pid=51772 uid=0 result="success"
Jan 27 08:23:11 compute-0 NetworkManager[48994]: <info>  [1769502191.1991] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 27 08:23:11 compute-0 NetworkManager[48994]: <info>  [1769502191.2020] audit: op="networking-control" arg="global-dns-configuration" pid=51772 uid=0 result="success"
Jan 27 08:23:11 compute-0 NetworkManager[48994]: <info>  [1769502191.2042] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51772 uid=0 result="success"
Jan 27 08:23:11 compute-0 NetworkManager[48994]: <info>  [1769502191.3740] checkpoint[0x5578a94bca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 27 08:23:11 compute-0 NetworkManager[48994]: <info>  [1769502191.3749] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51772 uid=0 result="success"
Jan 27 08:23:11 compute-0 ansible-async_wrapper.py[51770]: Module complete (51770)
Jan 27 08:23:11 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 08:23:12 compute-0 ansible-async_wrapper.py[51769]: Done in kid B.
Jan 27 08:23:14 compute-0 sudo[52235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etsgqobsiimfqaxzocfvyszztymornkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502190.3688517-845-88506843393707/AnsiballZ_async_status.py'
Jan 27 08:23:14 compute-0 sudo[52235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:14 compute-0 python3.9[52237]: ansible-ansible.legacy.async_status Invoked with jid=j707455061372.51766 mode=status _async_dir=/root/.ansible_async
Jan 27 08:23:14 compute-0 sudo[52235]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:14 compute-0 sudo[52335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cspcoojgynokfyjrpmnbssndrlylhbga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502190.3688517-845-88506843393707/AnsiballZ_async_status.py'
Jan 27 08:23:14 compute-0 sudo[52335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:14 compute-0 python3.9[52337]: ansible-ansible.legacy.async_status Invoked with jid=j707455061372.51766 mode=cleanup _async_dir=/root/.ansible_async
Jan 27 08:23:14 compute-0 sudo[52335]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:15 compute-0 sudo[52487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkeghftohyrgermqximeyzkzugxyzony ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502195.2472632-926-267290237627569/AnsiballZ_stat.py'
Jan 27 08:23:15 compute-0 sudo[52487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:15 compute-0 python3.9[52489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:15 compute-0 sudo[52487]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:16 compute-0 sudo[52610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnvnuopmsrlkrdfxfuiezahaldsckgjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502195.2472632-926-267290237627569/AnsiballZ_copy.py'
Jan 27 08:23:16 compute-0 sudo[52610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:16 compute-0 python3.9[52612]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769502195.2472632-926-267290237627569/.source.returncode _original_basename=.ez2udneq follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:16 compute-0 sudo[52610]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:16 compute-0 sudo[52762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdrbglaojducfrrsmrsuhdnrepnhybdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502196.6729422-974-122405960616748/AnsiballZ_stat.py'
Jan 27 08:23:16 compute-0 sudo[52762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:17 compute-0 python3.9[52764]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:17 compute-0 sudo[52762]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:17 compute-0 sudo[52885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eollryjuvuhjcfdezbnksjidkjnxbias ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502196.6729422-974-122405960616748/AnsiballZ_copy.py'
Jan 27 08:23:17 compute-0 sudo[52885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:17 compute-0 python3.9[52887]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769502196.6729422-974-122405960616748/.source.cfg _original_basename=.lvk58rlt follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:17 compute-0 sudo[52885]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:18 compute-0 sudo[53038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoddmoegnfqfsfbrvtcaqllekyxdnwfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502197.9651704-1019-40706201042995/AnsiballZ_systemd.py'
Jan 27 08:23:18 compute-0 sudo[53038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:18 compute-0 python3.9[53040]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:23:18 compute-0 systemd[1]: Reloading Network Manager...
Jan 27 08:23:18 compute-0 NetworkManager[48994]: <info>  [1769502198.6542] audit: op="reload" arg="0" pid=53044 uid=0 result="success"
Jan 27 08:23:18 compute-0 NetworkManager[48994]: <info>  [1769502198.6546] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 27 08:23:18 compute-0 systemd[1]: Reloaded Network Manager.
Jan 27 08:23:18 compute-0 sudo[53038]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:20 compute-0 sshd-session[44999]: Connection closed by 192.168.122.30 port 48440
Jan 27 08:23:20 compute-0 sshd-session[44996]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:23:20 compute-0 systemd-logind[799]: Session 10 logged out. Waiting for processes to exit.
Jan 27 08:23:20 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 27 08:23:20 compute-0 systemd[1]: session-10.scope: Consumed 48.257s CPU time.
Jan 27 08:23:20 compute-0 systemd-logind[799]: Removed session 10.
Jan 27 08:23:25 compute-0 sshd-session[53075]: Accepted publickey for zuul from 192.168.122.30 port 39300 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:23:25 compute-0 systemd-logind[799]: New session 11 of user zuul.
Jan 27 08:23:25 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 27 08:23:25 compute-0 sshd-session[53075]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:23:26 compute-0 python3.9[53228]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:23:27 compute-0 python3.9[53383]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:23:28 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 08:23:29 compute-0 python3.9[53577]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:23:29 compute-0 sshd-session[53078]: Connection closed by 192.168.122.30 port 39300
Jan 27 08:23:29 compute-0 sshd-session[53075]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:23:29 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 27 08:23:29 compute-0 systemd[1]: session-11.scope: Consumed 2.512s CPU time.
Jan 27 08:23:29 compute-0 systemd-logind[799]: Session 11 logged out. Waiting for processes to exit.
Jan 27 08:23:29 compute-0 systemd-logind[799]: Removed session 11.
Jan 27 08:23:34 compute-0 sshd-session[53605]: Accepted publickey for zuul from 192.168.122.30 port 56592 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:23:34 compute-0 systemd-logind[799]: New session 12 of user zuul.
Jan 27 08:23:34 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 27 08:23:34 compute-0 sshd-session[53605]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:23:35 compute-0 python3.9[53758]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:23:36 compute-0 python3.9[53912]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:23:37 compute-0 sudo[54067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uljrhunnyfccieyzdovopmlohuroinud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502217.010982-80-6134751126058/AnsiballZ_setup.py'
Jan 27 08:23:37 compute-0 sudo[54067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:37 compute-0 python3.9[54069]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:23:37 compute-0 sudo[54067]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:38 compute-0 sudo[54151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chfyeolkyxexoheyzhallphlqeadxfuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502217.010982-80-6134751126058/AnsiballZ_dnf.py'
Jan 27 08:23:38 compute-0 sudo[54151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:38 compute-0 python3.9[54153]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:23:40 compute-0 sudo[54151]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:41 compute-0 sudo[54305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaynjnksdkqxirtdhcgbifwhgtbhoaey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502221.171378-116-15806239667867/AnsiballZ_setup.py'
Jan 27 08:23:41 compute-0 sudo[54305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:41 compute-0 python3.9[54307]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:23:41 compute-0 sudo[54305]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:42 compute-0 sudo[54500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svbrfhwrvepthnjutvakwhagfcdiyhes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502222.393051-149-129918142481965/AnsiballZ_file.py'
Jan 27 08:23:42 compute-0 sudo[54500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:43 compute-0 python3.9[54502]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:43 compute-0 sudo[54500]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:43 compute-0 sudo[54652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wybasbvndijrtsturnmokeptatjzkumh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502223.21923-173-69311164026420/AnsiballZ_command.py'
Jan 27 08:23:43 compute-0 sudo[54652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:43 compute-0 python3.9[54654]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:23:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck2722694853-merged.mount: Deactivated successfully.
Jan 27 08:23:43 compute-0 podman[54655]: 2026-01-27 08:23:43.949771924 +0000 UTC m=+0.058318807 system refresh
Jan 27 08:23:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:23:43 compute-0 sudo[54652]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:44 compute-0 sudo[54813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdcfegcwzkezdpyjoukrlqfzmehmquyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502224.1746306-197-55813341057521/AnsiballZ_stat.py'
Jan 27 08:23:44 compute-0 sudo[54813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:44 compute-0 python3.9[54815]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:44 compute-0 sudo[54813]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:45 compute-0 sudo[54936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbwlcemvyedgizsuhigofcainlwsknqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502224.1746306-197-55813341057521/AnsiballZ_copy.py'
Jan 27 08:23:45 compute-0 sudo[54936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:45 compute-0 python3.9[54938]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502224.1746306-197-55813341057521/.source.json follow=False _original_basename=podman_network_config.j2 checksum=9d64f492d1c1c49943e7d176bd3e1e26cde62990 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:45 compute-0 sudo[54936]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:46 compute-0 sudo[55088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzdveijxzhszbpzwiafqrenkcbmrihey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502225.7923787-242-98663323232351/AnsiballZ_stat.py'
Jan 27 08:23:46 compute-0 sudo[55088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:46 compute-0 python3.9[55090]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:46 compute-0 sudo[55088]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:46 compute-0 sudo[55211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dajrgpqqeurwzprlzfywglwjwbsyuzah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502225.7923787-242-98663323232351/AnsiballZ_copy.py'
Jan 27 08:23:46 compute-0 sudo[55211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:46 compute-0 python3.9[55213]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769502225.7923787-242-98663323232351/.source.conf follow=False _original_basename=registries.conf.j2 checksum=76a61c2dcef8c729f52de4ab2e4a413b55a36d10 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:46 compute-0 sudo[55211]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:47 compute-0 sudo[55363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzoliocvqpcxdghekbcinfmoyionnoly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502227.1311357-290-72870337586082/AnsiballZ_ini_file.py'
Jan 27 08:23:47 compute-0 sudo[55363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:47 compute-0 python3.9[55365]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:47 compute-0 sudo[55363]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:48 compute-0 sudo[55515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljqqnjbohphzzarqaovidostpyptuwwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502227.8654509-290-175336768332234/AnsiballZ_ini_file.py'
Jan 27 08:23:48 compute-0 sudo[55515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:48 compute-0 python3.9[55517]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:48 compute-0 sudo[55515]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:48 compute-0 sudo[55667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtvkxdsojtsfaonxfzvkdtwygwkmkrzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502228.455077-290-184811055491228/AnsiballZ_ini_file.py'
Jan 27 08:23:48 compute-0 sudo[55667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:49 compute-0 python3.9[55669]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:49 compute-0 sudo[55667]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:49 compute-0 sudo[55819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccvbdmhjeuknnylaxzunjwivdkvvgmqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502229.1582365-290-90250604678335/AnsiballZ_ini_file.py'
Jan 27 08:23:49 compute-0 sudo[55819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:49 compute-0 python3.9[55821]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:49 compute-0 sudo[55819]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:50 compute-0 sudo[55971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izrlpnnqjiwtrmsyirtmlstmjzjqlvnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502230.461953-383-194196102668502/AnsiballZ_dnf.py'
Jan 27 08:23:50 compute-0 sudo[55971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:50 compute-0 python3.9[55973]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:23:52 compute-0 sudo[55971]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:53 compute-0 sudo[56124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpzgdceyanzyqehshhluxnrmlxjjpyqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502232.9131575-416-226792217723297/AnsiballZ_setup.py'
Jan 27 08:23:53 compute-0 sudo[56124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:53 compute-0 python3.9[56126]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:23:53 compute-0 sudo[56124]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:54 compute-0 sudo[56278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gehxymgzzqabavcbaaqdffvllbnjrzcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502233.8638773-440-232127047069107/AnsiballZ_stat.py'
Jan 27 08:23:54 compute-0 sudo[56278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:54 compute-0 python3.9[56280]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:23:54 compute-0 sudo[56278]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:54 compute-0 sudo[56430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsxrjastftiyfisfcjxbyhvjuljxkcib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502234.7082324-467-22740576357830/AnsiballZ_stat.py'
Jan 27 08:23:54 compute-0 sudo[56430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:55 compute-0 python3.9[56432]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:23:55 compute-0 sudo[56430]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:55 compute-0 sudo[56582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utihpeftpqiibjqpbeyexeqqciulxdnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502235.533918-497-46605540399794/AnsiballZ_command.py'
Jan 27 08:23:55 compute-0 sudo[56582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:55 compute-0 python3.9[56584]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:23:56 compute-0 sudo[56582]: pam_unix(sudo:session): session closed for user root
Jan 27 08:23:56 compute-0 sudo[56735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrrmqreyeicvafcbljqsaajqjtgoejsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502236.3956401-527-17645583558564/AnsiballZ_service_facts.py'
Jan 27 08:23:56 compute-0 sudo[56735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:23:57 compute-0 python3.9[56737]: ansible-service_facts Invoked
Jan 27 08:23:57 compute-0 network[56754]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:23:57 compute-0 network[56755]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:23:57 compute-0 network[56756]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:23:59 compute-0 sudo[56735]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:03 compute-0 sudo[57039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltcrmtwtdtwhsuhwrqjyquezqihvfkpz ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769502243.563519-572-270605027522610/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769502243.563519-572-270605027522610/args'
Jan 27 08:24:03 compute-0 sudo[57039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:03 compute-0 sudo[57039]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:04 compute-0 sudo[57206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vawmkmnfgfetqwetcpomemblvcdlrezq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502244.339669-605-102921847847551/AnsiballZ_dnf.py'
Jan 27 08:24:04 compute-0 sudo[57206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:04 compute-0 python3.9[57208]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:24:06 compute-0 sudo[57206]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:07 compute-0 sudo[57359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sismhszcdyofbnrpszssltwgpxauiogb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502246.771356-644-43292407444462/AnsiballZ_package_facts.py'
Jan 27 08:24:07 compute-0 sudo[57359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:07 compute-0 python3.9[57361]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 27 08:24:07 compute-0 sudo[57359]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:08 compute-0 sudo[57511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhhxnchgfxscfqderygpujordnhinktm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502248.5723395-674-159909016218388/AnsiballZ_stat.py'
Jan 27 08:24:08 compute-0 sudo[57511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:09 compute-0 python3.9[57513]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:24:09 compute-0 sudo[57511]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:09 compute-0 sudo[57636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zthbpsrnnxwmakiojucnvzhlmyercfrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502248.5723395-674-159909016218388/AnsiballZ_copy.py'
Jan 27 08:24:09 compute-0 sudo[57636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:09 compute-0 python3.9[57638]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769502248.5723395-674-159909016218388/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:09 compute-0 sudo[57636]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:10 compute-0 sudo[57790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqvlvgnshqcynisaepejdybxaiatrupm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502250.119349-719-202790569748845/AnsiballZ_stat.py'
Jan 27 08:24:10 compute-0 sudo[57790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:10 compute-0 python3.9[57792]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:24:10 compute-0 sudo[57790]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:10 compute-0 sudo[57915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvftzzdcgobobmqjconthmloauzojrjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502250.119349-719-202790569748845/AnsiballZ_copy.py'
Jan 27 08:24:10 compute-0 sudo[57915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:11 compute-0 python3.9[57917]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769502250.119349-719-202790569748845/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:11 compute-0 sudo[57915]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:12 compute-0 sudo[58069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evxfbqqoelekluniqfwgpyxzxyskdzqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502252.3338065-782-162939702878773/AnsiballZ_lineinfile.py'
Jan 27 08:24:12 compute-0 sudo[58069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:13 compute-0 python3.9[58071]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:13 compute-0 sudo[58069]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:14 compute-0 sudo[58223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxlrukofbcqwgvfbiekmhwhejopqwapk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502254.2243233-827-85904328289201/AnsiballZ_setup.py'
Jan 27 08:24:14 compute-0 sudo[58223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:14 compute-0 python3.9[58225]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:24:15 compute-0 sudo[58223]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:15 compute-0 sudo[58307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrtdhuiwnzinbkxtrdfsxahnhqrtcqsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502254.2243233-827-85904328289201/AnsiballZ_systemd.py'
Jan 27 08:24:15 compute-0 sudo[58307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:15 compute-0 python3.9[58309]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:24:15 compute-0 sudo[58307]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:17 compute-0 sudo[58462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwtvdykokrqmbdsuqswrzpevoielwdjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502256.8240476-875-31991185337414/AnsiballZ_setup.py'
Jan 27 08:24:17 compute-0 sudo[58462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:17 compute-0 python3.9[58464]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:24:17 compute-0 sudo[58462]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:18 compute-0 sudo[58546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlvwfbwssopwutokfypberfwymlxwcfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502256.8240476-875-31991185337414/AnsiballZ_systemd.py'
Jan 27 08:24:18 compute-0 sudo[58546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:18 compute-0 python3.9[58548]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:24:18 compute-0 chronyd[786]: chronyd exiting
Jan 27 08:24:18 compute-0 systemd[1]: Stopping NTP client/server...
Jan 27 08:24:18 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 27 08:24:18 compute-0 systemd[1]: Stopped NTP client/server.
Jan 27 08:24:18 compute-0 systemd[1]: Starting NTP client/server...
Jan 27 08:24:18 compute-0 chronyd[58557]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 27 08:24:18 compute-0 chronyd[58557]: Frequency -27.651 +/- 0.786 ppm read from /var/lib/chrony/drift
Jan 27 08:24:18 compute-0 chronyd[58557]: Loaded seccomp filter (level 2)
Jan 27 08:24:18 compute-0 systemd[1]: Started NTP client/server.
Jan 27 08:24:18 compute-0 sudo[58546]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:18 compute-0 sshd-session[53608]: Connection closed by 192.168.122.30 port 56592
Jan 27 08:24:18 compute-0 sshd-session[53605]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:24:18 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 27 08:24:18 compute-0 systemd[1]: session-12.scope: Consumed 24.892s CPU time.
Jan 27 08:24:18 compute-0 systemd-logind[799]: Session 12 logged out. Waiting for processes to exit.
Jan 27 08:24:18 compute-0 systemd-logind[799]: Removed session 12.
Jan 27 08:24:24 compute-0 sshd-session[58583]: Accepted publickey for zuul from 192.168.122.30 port 55454 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:24:24 compute-0 systemd-logind[799]: New session 13 of user zuul.
Jan 27 08:24:24 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 27 08:24:24 compute-0 sshd-session[58583]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:24:24 compute-0 sudo[58736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tieeagrinmfqhjabyybhtigsexdjmuey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502264.3025703-26-150772321655972/AnsiballZ_file.py'
Jan 27 08:24:24 compute-0 sudo[58736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:24 compute-0 python3.9[58738]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:24 compute-0 sudo[58736]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:25 compute-0 sudo[58888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsoxjhegkuwymgctewgiugoeeocssyyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502265.4380224-62-188996892092053/AnsiballZ_stat.py'
Jan 27 08:24:25 compute-0 sudo[58888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:26 compute-0 python3.9[58890]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:24:26 compute-0 sudo[58888]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:26 compute-0 sudo[59011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgwcwmldwpfcqviukichqwzoyvwhunyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502265.4380224-62-188996892092053/AnsiballZ_copy.py'
Jan 27 08:24:26 compute-0 sudo[59011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:26 compute-0 python3.9[59013]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769502265.4380224-62-188996892092053/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:26 compute-0 sudo[59011]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:27 compute-0 sshd-session[58586]: Connection closed by 192.168.122.30 port 55454
Jan 27 08:24:27 compute-0 sshd-session[58583]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:24:27 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 27 08:24:27 compute-0 systemd[1]: session-13.scope: Consumed 1.498s CPU time.
Jan 27 08:24:27 compute-0 systemd-logind[799]: Session 13 logged out. Waiting for processes to exit.
Jan 27 08:24:27 compute-0 systemd-logind[799]: Removed session 13.
Jan 27 08:24:33 compute-0 sshd-session[59038]: Accepted publickey for zuul from 192.168.122.30 port 34992 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:24:33 compute-0 systemd-logind[799]: New session 14 of user zuul.
Jan 27 08:24:33 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 27 08:24:33 compute-0 sshd-session[59038]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:24:34 compute-0 python3.9[59191]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:24:35 compute-0 sudo[59345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjmnlchsuniopjvxyufyfwvveegobnkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502275.0348916-59-212661876154178/AnsiballZ_file.py'
Jan 27 08:24:35 compute-0 sudo[59345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:35 compute-0 python3.9[59347]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:35 compute-0 sudo[59345]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:36 compute-0 sudo[59520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knyfqtqbjmgvwicfbgwgwirmephpokia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502275.9296231-83-175034598701914/AnsiballZ_stat.py'
Jan 27 08:24:36 compute-0 sudo[59520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:36 compute-0 python3.9[59522]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:24:36 compute-0 sudo[59520]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:37 compute-0 sudo[59643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohombtxfqrogsasqnyknozkqyishezit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502275.9296231-83-175034598701914/AnsiballZ_copy.py'
Jan 27 08:24:37 compute-0 sudo[59643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:37 compute-0 python3.9[59645]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769502275.9296231-83-175034598701914/.source.json _original_basename=.wrl0ut46 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:37 compute-0 sudo[59643]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:38 compute-0 sudo[59796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qflkdasdfhtuihgdrvsszjlmzpcejaqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502277.763269-152-225485928648469/AnsiballZ_stat.py'
Jan 27 08:24:38 compute-0 sudo[59796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:38 compute-0 python3.9[59798]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:24:38 compute-0 sudo[59796]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:38 compute-0 sudo[59919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjbkvvdedazpkmgtgboaefglisqzzdtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502277.763269-152-225485928648469/AnsiballZ_copy.py'
Jan 27 08:24:38 compute-0 sudo[59919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:38 compute-0 python3.9[59921]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769502277.763269-152-225485928648469/.source _original_basename=.9rjcux_t follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:39 compute-0 sudo[59919]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:39 compute-0 sudo[60071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsozrxuslzxtkytexivtrgzdkjlbcrcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502279.2079568-200-79255284287193/AnsiballZ_file.py'
Jan 27 08:24:39 compute-0 sudo[60071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:39 compute-0 python3.9[60073]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:24:39 compute-0 sudo[60071]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:40 compute-0 sudo[60223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygtdzcvyioypvwemtctqvwgsyjgxfbqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502279.9405181-224-245142242733890/AnsiballZ_stat.py'
Jan 27 08:24:40 compute-0 sudo[60223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:40 compute-0 python3.9[60225]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:24:40 compute-0 sudo[60223]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:40 compute-0 sudo[60346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejhztbqwngiwwhzrdxnfndvxjechmtyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502279.9405181-224-245142242733890/AnsiballZ_copy.py'
Jan 27 08:24:40 compute-0 sudo[60346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:40 compute-0 python3.9[60348]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769502279.9405181-224-245142242733890/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:24:40 compute-0 sudo[60346]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:41 compute-0 sudo[60498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlhwsovowrrgphiltjkzeukekrhmnbha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502281.078822-224-179630601844874/AnsiballZ_stat.py'
Jan 27 08:24:41 compute-0 sudo[60498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:41 compute-0 python3.9[60500]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:24:41 compute-0 sudo[60498]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:41 compute-0 sudo[60621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kllzdwaclgaqmjcfhwljtzqxgzfsxrmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502281.078822-224-179630601844874/AnsiballZ_copy.py'
Jan 27 08:24:41 compute-0 sudo[60621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:42 compute-0 python3.9[60623]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769502281.078822-224-179630601844874/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:24:42 compute-0 sudo[60621]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:42 compute-0 sudo[60773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgnyvfbxvratumahorxxqahzduyxwfcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502282.4595807-311-45672678542737/AnsiballZ_file.py'
Jan 27 08:24:42 compute-0 sudo[60773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:42 compute-0 python3.9[60775]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:42 compute-0 sudo[60773]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:43 compute-0 sudo[60925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkxmdhgfluyvpslolwcbcjirzzafntcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502283.2235231-335-80910671376512/AnsiballZ_stat.py'
Jan 27 08:24:43 compute-0 sudo[60925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:43 compute-0 python3.9[60927]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:24:43 compute-0 sudo[60925]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:43 compute-0 sudo[61048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbkyzbrcvasdgsxukkuuqmeafkbxkfyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502283.2235231-335-80910671376512/AnsiballZ_copy.py'
Jan 27 08:24:43 compute-0 sudo[61048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:44 compute-0 python3.9[61050]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502283.2235231-335-80910671376512/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:44 compute-0 sudo[61048]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:44 compute-0 sudo[61200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pttlafhpizotblpsbgumygrsonilbftx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502284.5093296-380-64201983060163/AnsiballZ_stat.py'
Jan 27 08:24:44 compute-0 sudo[61200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:44 compute-0 python3.9[61202]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:24:44 compute-0 sudo[61200]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:45 compute-0 sudo[61323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsotvqquiccmatkdfgqerxwclelvpzid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502284.5093296-380-64201983060163/AnsiballZ_copy.py'
Jan 27 08:24:45 compute-0 sudo[61323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:45 compute-0 python3.9[61325]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502284.5093296-380-64201983060163/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:45 compute-0 sudo[61323]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:46 compute-0 sudo[61475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjetcctxyleuibvtzbpuvcjtpfznxudr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502285.7955012-425-124820361087345/AnsiballZ_systemd.py'
Jan 27 08:24:46 compute-0 sudo[61475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:46 compute-0 python3.9[61477]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:24:46 compute-0 systemd[1]: Reloading.
Jan 27 08:24:46 compute-0 systemd-sysv-generator[61506]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:24:46 compute-0 systemd-rc-local-generator[61500]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:24:47 compute-0 systemd[1]: Reloading.
Jan 27 08:24:47 compute-0 systemd-rc-local-generator[61541]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:24:47 compute-0 systemd-sysv-generator[61544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:24:47 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 27 08:24:47 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 27 08:24:47 compute-0 sudo[61475]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:48 compute-0 sudo[61701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lghpbckzdeadeoxdbxlmrzihyslzrkxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502288.494329-449-144372178273019/AnsiballZ_stat.py'
Jan 27 08:24:48 compute-0 sudo[61701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:48 compute-0 python3.9[61703]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:24:49 compute-0 sudo[61701]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:49 compute-0 sudo[61824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmbganfuiwyzgkgrfobpyjeljuofnsdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502288.494329-449-144372178273019/AnsiballZ_copy.py'
Jan 27 08:24:49 compute-0 sudo[61824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:49 compute-0 python3.9[61826]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502288.494329-449-144372178273019/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:49 compute-0 sudo[61824]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:50 compute-0 sudo[61976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekwamgwkomfbuuhyjrdewukbuxkxjryo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502289.9848335-494-53000493608325/AnsiballZ_stat.py'
Jan 27 08:24:50 compute-0 sudo[61976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:50 compute-0 python3.9[61978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:24:50 compute-0 sudo[61976]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:50 compute-0 sudo[62099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mycmldhnbxlfxetruznmldihyfhrabyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502289.9848335-494-53000493608325/AnsiballZ_copy.py'
Jan 27 08:24:50 compute-0 sudo[62099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:51 compute-0 python3.9[62101]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502289.9848335-494-53000493608325/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:51 compute-0 sudo[62099]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:51 compute-0 sudo[62251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avxzulvykvfzdmfflitnxnhxxganjwdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502291.4584277-539-142515306128478/AnsiballZ_systemd.py'
Jan 27 08:24:51 compute-0 sudo[62251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:24:52 compute-0 python3.9[62253]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:24:52 compute-0 systemd[1]: Reloading.
Jan 27 08:24:52 compute-0 systemd-rc-local-generator[62281]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:24:52 compute-0 systemd-sysv-generator[62284]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:24:52 compute-0 systemd[1]: Reloading.
Jan 27 08:24:52 compute-0 systemd-rc-local-generator[62318]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:24:52 compute-0 systemd-sysv-generator[62323]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:24:52 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 08:24:52 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 08:24:52 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 08:24:52 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 08:24:52 compute-0 sudo[62251]: pam_unix(sudo:session): session closed for user root
Jan 27 08:24:54 compute-0 python3.9[62480]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:24:54 compute-0 network[62497]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:24:54 compute-0 network[62498]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:24:54 compute-0 network[62499]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:24:59 compute-0 sudo[62759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsyfovspbdycyopjoeqlnlwvzvlhgwel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502299.5783014-587-24730227494290/AnsiballZ_systemd.py'
Jan 27 08:24:59 compute-0 sudo[62759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:00 compute-0 python3.9[62761]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:25:00 compute-0 systemd[1]: Reloading.
Jan 27 08:25:00 compute-0 systemd-sysv-generator[62795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:25:00 compute-0 systemd-rc-local-generator[62792]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:25:00 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 27 08:25:00 compute-0 iptables.init[62802]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 27 08:25:00 compute-0 iptables.init[62802]: iptables: Flushing firewall rules: [  OK  ]
Jan 27 08:25:00 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 27 08:25:00 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 27 08:25:00 compute-0 sudo[62759]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:01 compute-0 sudo[62996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iobtyswfllrpniffcfshgtnjgxfjngpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502300.8743262-587-66610402877771/AnsiballZ_systemd.py'
Jan 27 08:25:01 compute-0 sudo[62996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:01 compute-0 python3.9[62998]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:25:01 compute-0 sudo[62996]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:02 compute-0 sudo[63150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mepvwojfughutmjgqskujfmgpaezsmto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502302.0986826-635-180469689911737/AnsiballZ_systemd.py'
Jan 27 08:25:02 compute-0 sudo[63150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:02 compute-0 python3.9[63152]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:25:02 compute-0 systemd[1]: Reloading.
Jan 27 08:25:02 compute-0 systemd-rc-local-generator[63181]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:25:02 compute-0 systemd-sysv-generator[63184]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:25:02 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 27 08:25:02 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 27 08:25:03 compute-0 sudo[63150]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:03 compute-0 sudo[63341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzhkwqgjigfcxhgjqfqezmsrwwzxdhtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502303.3614473-659-103271569848968/AnsiballZ_command.py'
Jan 27 08:25:03 compute-0 sudo[63341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:03 compute-0 python3.9[63343]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:25:03 compute-0 sudo[63341]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:05 compute-0 sudo[63494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyacovxmgypwxoonlbhmsthlvggyulnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502304.7523801-701-184588916941961/AnsiballZ_stat.py'
Jan 27 08:25:05 compute-0 sudo[63494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:05 compute-0 python3.9[63496]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:25:05 compute-0 sudo[63494]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:05 compute-0 sudo[63619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lteukwmdkwpfqnpxkmdxrltfisjaypsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502304.7523801-701-184588916941961/AnsiballZ_copy.py'
Jan 27 08:25:05 compute-0 sudo[63619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:05 compute-0 python3.9[63621]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769502304.7523801-701-184588916941961/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:05 compute-0 sudo[63619]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:06 compute-0 sudo[63772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbjprescelafrwrpewuvevhdkkipbeyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502306.2963545-746-43494526940081/AnsiballZ_systemd.py'
Jan 27 08:25:06 compute-0 sudo[63772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:06 compute-0 python3.9[63774]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:25:06 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 27 08:25:06 compute-0 sshd[1008]: Received SIGHUP; restarting.
Jan 27 08:25:06 compute-0 sshd[1008]: Server listening on 0.0.0.0 port 22.
Jan 27 08:25:06 compute-0 sshd[1008]: Server listening on :: port 22.
Jan 27 08:25:06 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 27 08:25:07 compute-0 sudo[63772]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:07 compute-0 sudo[63928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utiolqciwzrprwbbavhvvejtgwersgxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502307.2926192-770-156990883338835/AnsiballZ_file.py'
Jan 27 08:25:07 compute-0 sudo[63928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:07 compute-0 python3.9[63930]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:07 compute-0 sudo[63928]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:08 compute-0 sudo[64080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbmmotbnkgxazfelvrumevlnhppnbhqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502308.0699039-794-26378202663450/AnsiballZ_stat.py'
Jan 27 08:25:08 compute-0 sudo[64080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:08 compute-0 python3.9[64082]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:25:08 compute-0 sudo[64080]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:08 compute-0 sudo[64203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqriscylgmhdoixondqvmuqnbcewzvvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502308.0699039-794-26378202663450/AnsiballZ_copy.py'
Jan 27 08:25:08 compute-0 sudo[64203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:09 compute-0 python3.9[64205]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502308.0699039-794-26378202663450/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:09 compute-0 sudo[64203]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:10 compute-0 sudo[64355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btrbtxdjwhwyuoabcfxuhujulgdlabpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502309.7278855-848-9110109155555/AnsiballZ_timezone.py'
Jan 27 08:25:10 compute-0 sudo[64355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:10 compute-0 python3.9[64357]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 27 08:25:10 compute-0 systemd[1]: Starting Time & Date Service...
Jan 27 08:25:10 compute-0 systemd[1]: Started Time & Date Service.
Jan 27 08:25:10 compute-0 sudo[64355]: pam_unix(sudo:session): session closed for user root
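community.general.timezone with name=UTC is serviced over D-Bus by systemd-timedated, which is why the Time & Date Service starts around the call (and idles out at 08:25:40 below). A sketch of the manual equivalent:

    timedatectl set-timezone UTC     # handled by systemd-timedated
    timedatectl show -p Timezone     # verify: prints Timezone=UTC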
Jan 27 08:25:11 compute-0 sudo[64511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkdwhdgtdqpngncpvzoqohzyfzdnafoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502310.9841528-875-84107204285311/AnsiballZ_file.py'
Jan 27 08:25:11 compute-0 sudo[64511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:11 compute-0 python3.9[64513]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:11 compute-0 sudo[64511]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:12 compute-0 sudo[64663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwwaoqiqmczmkwnweqzoanrhysvzgyfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502311.7872896-899-160453467338509/AnsiballZ_stat.py'
Jan 27 08:25:12 compute-0 sudo[64663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:12 compute-0 python3.9[64665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:25:12 compute-0 sudo[64663]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:12 compute-0 sudo[64786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skwhzgaqigwviofgjxumoawgjhwhbcsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502311.7872896-899-160453467338509/AnsiballZ_copy.py'
Jan 27 08:25:12 compute-0 sudo[64786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:12 compute-0 python3.9[64788]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769502311.7872896-899-160453467338509/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:12 compute-0 sudo[64786]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:13 compute-0 sudo[64938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgbmcdelhfybyfnsjufywyrhuvgxcyij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502313.4497578-944-117124426859265/AnsiballZ_stat.py'
Jan 27 08:25:13 compute-0 sudo[64938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:13 compute-0 python3.9[64940]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:25:13 compute-0 sudo[64938]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:14 compute-0 sudo[65061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyabfkrasjzpxvcmygvqizshyzurkdfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502313.4497578-944-117124426859265/AnsiballZ_copy.py'
Jan 27 08:25:14 compute-0 sudo[65061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:14 compute-0 python3.9[65063]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769502313.4497578-944-117124426859265/.source.yaml _original_basename=.8q34twnq follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:14 compute-0 sudo[65061]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:15 compute-0 sudo[65213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-botpktiuhpcpihvolugzxtffyvfwpiaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502314.912947-989-162257872682354/AnsiballZ_stat.py'
Jan 27 08:25:15 compute-0 sudo[65213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:15 compute-0 python3.9[65215]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:25:15 compute-0 sudo[65213]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:15 compute-0 sudo[65336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kskghsbjrqyosyhgayezlvbcofaymtfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502314.912947-989-162257872682354/AnsiballZ_copy.py'
Jan 27 08:25:15 compute-0 sudo[65336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:15 compute-0 python3.9[65338]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502314.912947-989-162257872682354/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:15 compute-0 sudo[65336]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:16 compute-0 sudo[65488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yakzoqenqvazgdnocqycqzpcbxtoziuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502316.5000043-1034-253605508951540/AnsiballZ_command.py'
Jan 27 08:25:16 compute-0 sudo[65488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:17 compute-0 python3.9[65490]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:25:17 compute-0 sudo[65488]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:17 compute-0 sudo[65641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-antidzmzcnxjoqchuckyymewfcaeytls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502317.5187058-1058-197219302754397/AnsiballZ_command.py'
Jan 27 08:25:17 compute-0 sudo[65641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:17 compute-0 python3.9[65643]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:25:18 compute-0 sudo[65641]: pam_unix(sudo:session): session closed for user root
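The two commands above load the staged base ruleset and then snapshot the live state: nft -f applies a whole file as one atomic transaction, and nft -j list ruleset emits the same ruleset as JSON, a form the edpm_nftables_from_files step that follows can presumably consume. Manual equivalent (the snapshot path is hypothetical):

    nft -f /etc/nftables/iptables.nft          # load staged base tables atomically
    nft -j list ruleset > /tmp/ruleset.json    # machine-readable snapshot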
Jan 27 08:25:18 compute-0 sudo[65794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deccbsictnoyisrdhfbbouedqatndrre ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769502318.5170336-1082-146682532845853/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 08:25:18 compute-0 sudo[65794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:19 compute-0 python3[65796]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 08:25:19 compute-0 sudo[65794]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:19 compute-0 sudo[65946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvnywkskhzdkchvdthamypmhmouhpsef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502319.4143698-1106-162691083562764/AnsiballZ_stat.py'
Jan 27 08:25:19 compute-0 sudo[65946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:19 compute-0 python3.9[65948]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:25:20 compute-0 sudo[65946]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:20 compute-0 sudo[66069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gixgvfbpzmfohwqmawykczstjbxcbtew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502319.4143698-1106-162691083562764/AnsiballZ_copy.py'
Jan 27 08:25:20 compute-0 sudo[66069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:20 compute-0 python3.9[66071]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502319.4143698-1106-162691083562764/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:20 compute-0 sudo[66069]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:21 compute-0 sudo[66221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-celnlhqhizjorhdzlslpazqfppkrzkxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502320.9641433-1151-195761959419414/AnsiballZ_stat.py'
Jan 27 08:25:21 compute-0 sudo[66221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:21 compute-0 python3.9[66223]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:25:21 compute-0 sudo[66221]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:21 compute-0 sudo[66344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxncttqqdydwnuosfphikknyriqlkqsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502320.9641433-1151-195761959419414/AnsiballZ_copy.py'
Jan 27 08:25:21 compute-0 sudo[66344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:22 compute-0 python3.9[66346]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502320.9641433-1151-195761959419414/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:22 compute-0 sudo[66344]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:22 compute-0 sudo[66496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhtgeaguucfiupdhgvkpsssckkilpbtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502322.463875-1196-222346279424566/AnsiballZ_stat.py'
Jan 27 08:25:22 compute-0 sudo[66496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:22 compute-0 python3.9[66498]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:25:23 compute-0 sudo[66496]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:23 compute-0 sudo[66619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnactdshlxkgyhmttggloetyvzwbvmrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502322.463875-1196-222346279424566/AnsiballZ_copy.py'
Jan 27 08:25:23 compute-0 sudo[66619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:23 compute-0 python3.9[66621]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502322.463875-1196-222346279424566/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:23 compute-0 sudo[66619]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:24 compute-0 sudo[66771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsmpwjaeuxgzrpezvmlulffrvvebcfrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502324.052004-1241-144029041835252/AnsiballZ_stat.py'
Jan 27 08:25:24 compute-0 sudo[66771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:24 compute-0 python3.9[66773]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:25:24 compute-0 sudo[66771]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:24 compute-0 sudo[66894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcbmeccolmguwkflbhawtjyatwibthrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502324.052004-1241-144029041835252/AnsiballZ_copy.py'
Jan 27 08:25:24 compute-0 sudo[66894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:25 compute-0 python3.9[66896]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502324.052004-1241-144029041835252/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:25 compute-0 sudo[66894]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:25 compute-0 sudo[67046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vssrkcqpsemyzbedezhoyezaqtkuhakx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502325.4663064-1286-174906721950918/AnsiballZ_stat.py'
Jan 27 08:25:25 compute-0 sudo[67046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:25 compute-0 python3.9[67048]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:25:25 compute-0 sudo[67046]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:26 compute-0 sudo[67169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdvgufzfaskuitnjyvrtzqztthvfsykz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502325.4663064-1286-174906721950918/AnsiballZ_copy.py'
Jan 27 08:25:26 compute-0 sudo[67169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:26 compute-0 python3.9[67171]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502325.4663064-1286-174906721950918/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:26 compute-0 sudo[67169]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:27 compute-0 sudo[67321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpyzdrjsravuzgrulvezhvkejwdlfece ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502326.8688018-1331-243414238905188/AnsiballZ_file.py'
Jan 27 08:25:27 compute-0 sudo[67321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:27 compute-0 python3.9[67323]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:27 compute-0 sudo[67321]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:27 compute-0 sudo[67473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjcuyncarbvjhnfcsycuoxqnufmdmpgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502327.632093-1355-249715590446034/AnsiballZ_command.py'
Jan 27 08:25:27 compute-0 sudo[67473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:28 compute-0 python3.9[67475]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:25:28 compute-0 sudo[67473]: pam_unix(sudo:session): session closed for user root
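That pipeline is a dry run: nft -c (--check) parses and validates the concatenated files without committing anything, and the concatenation order matters because the flush, rules, and jump files reference chains that edpm-chains.nft defines first. The exit status is the verdict:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # exit 0: ruleset is valid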
Jan 27 08:25:28 compute-0 sudo[67632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-getczrsievkpiizngdrldgaouxydlgdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502328.5454721-1379-125314327367683/AnsiballZ_blockinfile.py'
Jan 27 08:25:28 compute-0 sudo[67632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:29 compute-0 python3.9[67634]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:29 compute-0 sudo[67632]: pam_unix(sudo:session): session closed for user root
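Given the block= payload and the marker=# {mark} ANSIBLE MANAGED BLOCK parameter logged above, the block written to /etc/sysconfig/nftables.conf (validated with nft -c -f %s before being swapped in) should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

Note that edpm-flushes.nft and edpm-update-jumps.nft are apparently left out of the boot-time config on purpose; in this log they appear only in the live-reload pipelines.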
Jan 27 08:25:29 compute-0 sudo[67785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbdjjdomegdysecpcojfxwhenzpzshds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502329.6223195-1406-74135985447985/AnsiballZ_file.py'
Jan 27 08:25:29 compute-0 sudo[67785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:30 compute-0 python3.9[67787]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:30 compute-0 sudo[67785]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:30 compute-0 sudo[67937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuifwthstzdermzrsjgstmegoadpjomi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502330.2663193-1406-4800896901307/AnsiballZ_file.py'
Jan 27 08:25:30 compute-0 sudo[67937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:30 compute-0 python3.9[67939]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:30 compute-0 sudo[67937]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:31 compute-0 sudo[68089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmomommyrbgqlzmojhcuhbtzkgdzlhnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502331.2196903-1451-126551973106001/AnsiballZ_mount.py'
Jan 27 08:25:31 compute-0 sudo[68089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:31 compute-0 python3.9[68091]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 08:25:31 compute-0 sudo[68089]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:32 compute-0 sudo[68242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlcaiyujdidrghvlnvqelwsrzexrnxsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502332.0395906-1451-57063895918558/AnsiballZ_mount.py'
Jan 27 08:25:32 compute-0 sudo[68242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:32 compute-0 python3.9[68244]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 08:25:32 compute-0 sudo[68242]: pam_unix(sudo:session): session closed for user root
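With state=mounted and boot=True, ansible.posix.mount both mounts each hugetlbfs immediately and persists it (dump=0, passno=0 as logged). The resulting /etc/fstab entries and the manual equivalents would be roughly:

    none  /dev/hugepages1G  hugetlbfs  pagesize=1G  0 0
    none  /dev/hugepages2M  hugetlbfs  pagesize=2M  0 0

    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M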
Jan 27 08:25:33 compute-0 sshd-session[59041]: Connection closed by 192.168.122.30 port 34992
Jan 27 08:25:33 compute-0 sshd-session[59038]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:25:33 compute-0 systemd-logind[799]: Session 14 logged out. Waiting for processes to exit.
Jan 27 08:25:33 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 27 08:25:33 compute-0 systemd[1]: session-14.scope: Consumed 32.907s CPU time.
Jan 27 08:25:33 compute-0 systemd-logind[799]: Removed session 14.
Jan 27 08:25:38 compute-0 sshd-session[68270]: Accepted publickey for zuul from 192.168.122.30 port 54646 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:25:38 compute-0 systemd-logind[799]: New session 15 of user zuul.
Jan 27 08:25:38 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 27 08:25:38 compute-0 sshd-session[68270]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:25:39 compute-0 sudo[68423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzuwxwqoucmotmbgtaesuqvydfaluwzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502338.7235732-23-207102090743645/AnsiballZ_tempfile.py'
Jan 27 08:25:39 compute-0 sudo[68423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:39 compute-0 python3.9[68425]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 27 08:25:39 compute-0 sudo[68423]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:40 compute-0 sudo[68575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaksjvsvkjmqfmcibwadgzujtgfneuqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502339.5998566-59-51568209043332/AnsiballZ_stat.py'
Jan 27 08:25:40 compute-0 sudo[68575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:40 compute-0 python3.9[68577]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:25:40 compute-0 sudo[68575]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:40 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 08:25:41 compute-0 sudo[68729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbmvoxdqapzpbnpdzvliynqmuqaqlhlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502340.587137-89-60733874972770/AnsiballZ_setup.py'
Jan 27 08:25:41 compute-0 sudo[68729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:41 compute-0 python3.9[68731]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:25:41 compute-0 sudo[68729]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:42 compute-0 sudo[68881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpfgrrxbrbykpmdwjnpoumdkiunigtbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502341.8715136-114-3388962655479/AnsiballZ_blockinfile.py'
Jan 27 08:25:42 compute-0 sudo[68881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:42 compute-0 python3.9[68883]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXsXvncMJ0UzA2kZWT6PmXqnKs4jKM0Sr01zB/XUpOk9hr3myA119m0OywalXpo8EKjtKewhszXHOe+836O9Oaro5nUthxWGueffDrPmvv3U+olo/D4WZHmtWqYMeuY9WZQYg3SkzROARzA5D1LBzcnj89JWIK3wozoImndBu1dy4wvoUl5pvJJb/8wpn6MW1qztsckSYeFyxKIjKUInt63Co2RDrpcNLx0ym4RH3nR/eak0lQJzFqg7dNSRKSnyq2KkAoqgXxqlBeMV3zXbvoM4T9/RDQNHhBTvj4Sz1gx0h6tQwZD5xvHsTUTpb8IY/WjYRb5bwfCqaY6GkxPXGgUZtOiQpgqVgIm/A0s6yMkCX+vgg7T5bDe42bXQ3T0yzYCXqXqKr7283USNHtAxvyS7HJ4+1jQooCUK7zLgzrxvzsa1Jbm3fD/DPpub8RUl4IHHsAn4snBFk0i5918tARMsoCVGeSSsIUpm0Lb6oP25Svt9veUbsUUIyBnZ9C9bk=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHBAV6NbK6BWJ7Z6z/q1/WahjUGnZCfaJADbVIPDztAu
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKoQeb8dUBWroibOpcXZLZW2jU7oc/D85IJfotnbJ13c+NsTa9bvtSQuFOZBSiJxFZBz8g5tHP1dX2zBDygyl0w=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeZVzJUmBr8rscwr/tyZkmyMtp8DsjVmT1Go2ik8PcNvKzMeefyxTZ3Uxpz1L0QWjiy3C+1IbTOPHtmNePnEYO6JumkmaqkVrXkXV0/NVfC4CBUpsyDRaGJ+STPhy6KJ71JuESt2ey61/P5BgKdolhn/ypkCXuLPOeGUOU/zr8Z6r0kUHQTMxbo22ElMOZ7E9WcU+1dhg1QOmJNjeRPf5zA6aWl70dc4DQz/MCGoEugK3/BHxi4LTHbTmqaAyxWPk5eIUGJ9ZhO368KneUkjHMCQJPuxrk5Nfi78IKmoXabmrRz+toYCoJZwHgN312halgSgYBiNObCY2wFuIa/1yyzH7fZ8t9izWftTnv6e6rIq1pwdRfGDEFMS3qfGJFbpVZd0vZRLxZcCSRj6nSGTsgYxrnKxiXN9DlnYL4mL8IobHYLgYvjEJRxSFNcvJeEec2dHf39g94gYUp5A6g0JSUQua9uGuUw3u2hjuw/hqWJ1+Qtqz+v6aAr5vDKoXnDnk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINsL+jpg9IqB3QHcoTIKXMaJ36zCdaJtKTD57FBkukfF
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKI5+7ocma2vDS2iiMtTo2VOfmNxAY3b9rJYJIYe1s2vpy4//aKnloQB1/36D/Ob9gEKU2cs2feFXzaHWoMb8Fw=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZQYFLJNCSqPubvkgV+mSAWsKyHEn5zEd8PjcBARYbKex1zE5KP5kYHD5RqkMSEGbaLqB01216SE5OLsJdp7zDtEvYvOiuzSdilxqK8FneqyHJuL3BVd0SK0Ou88elrYxCMog6DNui3gSOw4hb71J9rM8CeUo3ou61yWHQq1IuGW/eZdsN2zZRhtvYy6TmeozTA9iybgebjYHIk98nQOhocTi1H5QmICMFzzGX0A74QafSrIBBed8sHQ3ElScEdK/RfmmsHGKwkVkuEP34cvD+Agd8VSaQ5cSYjtTBzgNWSxd3MmLtX7xbx02sW6AixTXdc0Rg6z0wnrM5Rw2ACynusV8xc5JPUwMcPxzOVKVPuO4PahYvMmYIq/5Cn6rSakk4KiSkeHr5QU7XTn/b6Vg1UtxU4m2FUpqnuF8kn6VN4evt7snG9oN8IUBsFoTvviMHNT0oSz3yBCp3CQ72GzJJTt2p6B3fAJRuil9lWxe/Q6nzOAkcSes+tI54/Yx3AJ0=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIU4q8/exihWO+LCEgVZGFOu7nizMQ7PRBYf9UmhVfWu
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHy9oQDK7bN+bEbzpMT6Qkq1ZjkuLkqy7zdXiLz4z1/0zlHVkEt5G4ADDr6nb9SxllvpTitSX4S/ovd8Jbtwv6w=
                                             create=True mode=0644 path=/tmp/ansible.6u421bo7 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:42 compute-0 sudo[68881]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:43 compute-0 sudo[69033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzvkwfjvgxobnexuzswfpbasdkizqekx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502342.8714223-138-47442791629365/AnsiballZ_command.py'
Jan 27 08:25:43 compute-0 sudo[69033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:43 compute-0 python3.9[69035]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.6u421bo7' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:25:43 compute-0 sudo[69033]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:44 compute-0 sudo[69187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arjomfmpavahtidrevmgpmgofmzvtqhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502343.8301866-162-75701276318342/AnsiballZ_file.py'
Jan 27 08:25:44 compute-0 sudo[69187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:44 compute-0 python3.9[69189]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.6u421bo7 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:44 compute-0 sudo[69187]: pam_unix(sudo:session): session closed for user root
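The three tasks above assemble a cluster-wide known_hosts: host keys gathered from each node are written between Ansible markers in the tempfile created at 08:25:39, the tempfile then replaces /etc/ssh/ssh_known_hosts wholesale via the shell cat, and the scratch copy is removed. Each node contributes rsa, ed25519, and ecdsa entries keyed by a "fqdn,ip,shortname*" pattern, so the result can be spot-checked with:

    ssh-keygen -F compute-1.ctlplane.example.com -f /etc/ssh/ssh_known_hosts
    # prints the matching ssh-rsa / ssh-ed25519 / ecdsa-sha2-nistp256 lines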
Jan 27 08:25:44 compute-0 sshd-session[68273]: Connection closed by 192.168.122.30 port 54646
Jan 27 08:25:44 compute-0 sshd-session[68270]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:25:44 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 27 08:25:44 compute-0 systemd[1]: session-15.scope: Consumed 3.107s CPU time.
Jan 27 08:25:44 compute-0 systemd-logind[799]: Session 15 logged out. Waiting for processes to exit.
Jan 27 08:25:44 compute-0 systemd-logind[799]: Removed session 15.
Jan 27 08:25:50 compute-0 sshd-session[69214]: Accepted publickey for zuul from 192.168.122.30 port 58894 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:25:50 compute-0 systemd-logind[799]: New session 16 of user zuul.
Jan 27 08:25:50 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 27 08:25:50 compute-0 sshd-session[69214]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:25:51 compute-0 python3.9[69367]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:25:53 compute-0 sudo[69521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhznmmlszomcqvjblcdjxwmrlhahqqvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502352.4000466-56-121981480729072/AnsiballZ_systemd.py'
Jan 27 08:25:53 compute-0 sudo[69521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:53 compute-0 python3.9[69523]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 27 08:25:53 compute-0 sudo[69521]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:53 compute-0 sudo[69675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrfmymjidzvccdzfmzcrfxkxihirpdcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502353.6468625-80-126592155845442/AnsiballZ_systemd.py'
Jan 27 08:25:53 compute-0 sudo[69675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:54 compute-0 python3.9[69677]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:25:54 compute-0 sudo[69675]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:55 compute-0 sudo[69828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fslkxvwxstvzqzmhyfmvnamxobeqwdkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502354.6495264-107-156926304680735/AnsiballZ_command.py'
Jan 27 08:25:55 compute-0 sudo[69828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:55 compute-0 python3.9[69830]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:25:55 compute-0 sudo[69828]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:56 compute-0 sudo[69981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whpxixlqjzncydpsywvabvaloqtayiil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502355.6277037-131-227373151069227/AnsiballZ_stat.py'
Jan 27 08:25:56 compute-0 sudo[69981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:56 compute-0 python3.9[69983]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:25:56 compute-0 sudo[69981]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:56 compute-0 sudo[70135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfjxlhalrgvybibquvqviradnabufsac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502356.5696316-155-151675863027637/AnsiballZ_command.py'
Jan 27 08:25:56 compute-0 sudo[70135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:56 compute-0 python3.9[70137]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:25:57 compute-0 sudo[70135]: pam_unix(sudo:session): session closed for user root
Jan 27 08:25:57 compute-0 sudo[70290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diapwmplmymvivgvoygqwunoybmumopi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502357.3954966-179-250712754906924/AnsiballZ_file.py'
Jan 27 08:25:57 compute-0 sudo[70290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:25:58 compute-0 python3.9[70292]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:25:58 compute-0 sudo[70290]: pam_unix(sudo:session): session closed for user root
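This block closes the change-flag loop opened at 08:25:27: edpm-chains.nft is loaded unconditionally first (08:25:55) so the target chains always exist, the presence of /etc/nftables/edpm-rules.nft.changed (checked via stat) gates the live reload, and the flag is deleted once the reload succeeds. The pattern in shell form:

    nft -f /etc/nftables/edpm-chains.nft
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f - \
        && rm -f /etc/nftables/edpm-rules.nft.changed
    fi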
Jan 27 08:25:58 compute-0 sshd-session[69217]: Connection closed by 192.168.122.30 port 58894
Jan 27 08:25:58 compute-0 sshd-session[69214]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:25:58 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 27 08:25:58 compute-0 systemd[1]: session-16.scope: Consumed 3.950s CPU time.
Jan 27 08:25:58 compute-0 systemd-logind[799]: Session 16 logged out. Waiting for processes to exit.
Jan 27 08:25:58 compute-0 systemd-logind[799]: Removed session 16.
Jan 27 08:26:04 compute-0 sshd-session[70317]: Accepted publickey for zuul from 192.168.122.30 port 56486 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:26:04 compute-0 systemd-logind[799]: New session 17 of user zuul.
Jan 27 08:26:04 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 27 08:26:04 compute-0 sshd-session[70317]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:26:05 compute-0 python3.9[70470]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:26:06 compute-0 sudo[70624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnmzsvowdjybidqzkvjqjozqargjudus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502365.8483546-62-127317891034104/AnsiballZ_setup.py'
Jan 27 08:26:06 compute-0 sudo[70624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:06 compute-0 python3.9[70626]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:26:06 compute-0 sudo[70624]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:07 compute-0 sudo[70708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upanobhfvahzwgkmneqnoaviobdtimka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502365.8483546-62-127317891034104/AnsiballZ_dnf.py'
Jan 27 08:26:07 compute-0 sudo[70708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:07 compute-0 python3.9[70710]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 08:26:08 compute-0 sudo[70708]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:09 compute-0 python3.9[70861]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
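needs-restarting ships in the yum-utils package installed by the preceding dnf task; with -r it checks whether a reboot is required (kernel or core userspace updates) and encodes the answer in its exit status, which is what a command task would inspect:

    needs-restarting -r; echo $?   # 0: no reboot needed, 1: reboot required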
Jan 27 08:26:11 compute-0 python3.9[71012]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 08:26:11 compute-0 python3.9[71162]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:26:11 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:26:12 compute-0 python3.9[71313]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:26:13 compute-0 sshd-session[70320]: Connection closed by 192.168.122.30 port 56486
Jan 27 08:26:13 compute-0 sshd-session[70317]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:26:13 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 27 08:26:13 compute-0 systemd[1]: session-17.scope: Consumed 5.987s CPU time.
Jan 27 08:26:13 compute-0 systemd-logind[799]: Session 17 logged out. Waiting for processes to exit.
Jan 27 08:26:13 compute-0 systemd-logind[799]: Removed session 17.
Jan 27 08:26:22 compute-0 sshd-session[71338]: Accepted publickey for zuul from 38.102.83.162 port 43894 ssh2: RSA SHA256:DNK1vimKiSKrooFcnqxgdgoquKxzk/KTmMzYIUmiqbw
Jan 27 08:26:22 compute-0 systemd-logind[799]: New session 18 of user zuul.
Jan 27 08:26:22 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 27 08:26:22 compute-0 sshd-session[71338]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:26:22 compute-0 sudo[71414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvbfpadlvkgfqikxolfmcocgfdswssdb ; /usr/bin/python3'
Jan 27 08:26:22 compute-0 sudo[71414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:23 compute-0 useradd[71418]: new group: name=ceph-admin, GID=42478
Jan 27 08:26:23 compute-0 useradd[71418]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 27 08:26:24 compute-0 sudo[71414]: pam_unix(sudo:session): session closed for user root
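The first raw python3 task creates the ceph-admin account visible in the useradd lines, the dedicated user that cephadm-style deployments typically use for SSH orchestration between hosts. Manual equivalent of what was logged (UID/GID were auto-assigned):

    useradd -m -s /bin/bash ceph-admin   # also creates group ceph-admin
    id ceph-admin                        # uid=42477 gid=42478 per the log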
Jan 27 08:26:24 compute-0 sudo[71500]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlgrmfwjebuztnnjqzylxhrunqwqffxi ; /usr/bin/python3'
Jan 27 08:26:24 compute-0 sudo[71500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:24 compute-0 sudo[71500]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:25 compute-0 sudo[71573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drxtgkseqmdgvmrnjwgkchldgcnuvprd ; /usr/bin/python3'
Jan 27 08:26:25 compute-0 sudo[71573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:25 compute-0 sudo[71573]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:25 compute-0 sudo[71623]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usxchxasaypghccaldqqoqzfywynqmjk ; /usr/bin/python3'
Jan 27 08:26:25 compute-0 sudo[71623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:25 compute-0 sudo[71623]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:25 compute-0 sudo[71649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtfmqnsyyofpyffpxtqjulaatanrfzxa ; /usr/bin/python3'
Jan 27 08:26:25 compute-0 sudo[71649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:26 compute-0 sudo[71649]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:26 compute-0 sudo[71675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaogjxefmhtmrqkmqebnaxtqsivdmcmi ; /usr/bin/python3'
Jan 27 08:26:26 compute-0 sudo[71675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:26 compute-0 sudo[71675]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:26 compute-0 sudo[71701]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylrjyrworfphgwpvdzyzooxgkwlfszhy ; /usr/bin/python3'
Jan 27 08:26:26 compute-0 sudo[71701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:27 compute-0 sudo[71701]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:27 compute-0 sudo[71779]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqxioasoxzppmmwcrnlbubyheyehlegl ; /usr/bin/python3'
Jan 27 08:26:27 compute-0 sudo[71779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:27 compute-0 sudo[71779]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:27 compute-0 sudo[71852]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvclxjobmwrqolhkzaygncdvrcepjubf ; /usr/bin/python3'
Jan 27 08:26:27 compute-0 sudo[71852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:27 compute-0 chronyd[58557]: Selected source 138.197.164.54 (pool.ntp.org)
Jan 27 08:26:28 compute-0 sudo[71852]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:28 compute-0 sudo[71954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orjdwthoearflrprmyglzrmnwexwjtvc ; /usr/bin/python3'
Jan 27 08:26:28 compute-0 sudo[71954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:28 compute-0 sudo[71954]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:28 compute-0 sudo[72027]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywgiiuofgshuvdaehsjxkwhngftodiri ; /usr/bin/python3'
Jan 27 08:26:28 compute-0 sudo[72027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:29 compute-0 sudo[72027]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:29 compute-0 sudo[72077]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlwlyppvafvkuficmzlxsqbccixgieuf ; /usr/bin/python3'
Jan 27 08:26:29 compute-0 sudo[72077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:30 compute-0 python3[72079]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:26:30 compute-0 sudo[72077]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:31 compute-0 sudo[72172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sisyhxdpccazitwamhbikkwkaohhkvem ; /usr/bin/python3'
Jan 27 08:26:31 compute-0 sudo[72172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:31 compute-0 python3[72174]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 27 08:26:33 compute-0 sudo[72172]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:33 compute-0 sudo[72199]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhmxatwzlyiopcxgfmuropiayerxorym ; /usr/bin/python3'
Jan 27 08:26:33 compute-0 sudo[72199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:33 compute-0 python3[72201]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:26:33 compute-0 sudo[72199]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:33 compute-0 sudo[72225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftupcvqrkbkgzgspxvhacjrmakcvaarb ; /usr/bin/python3'
Jan 27 08:26:33 compute-0 sudo[72225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:34 compute-0 python3[72227]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:26:34 compute-0 kernel: loop: module loaded
Jan 27 08:26:34 compute-0 kernel: loop3: detected capacity change from 0 to 14680064
Jan 27 08:26:34 compute-0 sudo[72225]: pam_unix(sudo:session): session closed for user root
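The shell fragment above provisions a loop-backed disk for a test OSD: dd with bs=1 count=0 seek=7G writes no data and only sets the file length, yielding a sparse 7 GiB image, which losetup then exposes as /dev/loop3. The kernel line agrees: 14680064 sectors x 512 bytes = 7 GiB. Annotated:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G  # sparse file
    losetup /dev/loop3 /var/lib/ceph-osd-0.img                       # attach image
    lsblk                                                            # loop3 shows 7G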
Jan 27 08:26:34 compute-0 sudo[72260]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpoubilptshyylspxcvtevegfbawhtey ; /usr/bin/python3'
Jan 27 08:26:34 compute-0 sudo[72260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:34 compute-0 python3[72262]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:26:34 compute-0 lvm[72265]: PV /dev/loop3 not used.
Jan 27 08:26:34 compute-0 lvm[72267]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:26:34 compute-0 lvm[72277]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:26:34 compute-0 lvm[72277]: VG ceph_vg0 finished
Jan 27 08:26:34 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 27 08:26:34 compute-0 lvm[72276]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 27 08:26:34 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 27 08:26:34 compute-0 sudo[72260]: pam_unix(sudo:session): session closed for user root
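LVM is then layered on the loop device so the OSD can consume a logical volume; -l +100%FREE hands every extent in the volume group to the single LV, and the lvm[...] lines show udev autoactivation bringing ceph_vg0 online as soon as its only PV appears:

    pvcreate /dev/loop3                          # loop device becomes a PV
    vgcreate ceph_vg0 /dev/loop3                 # one-PV volume group
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0   # LV spans the whole VG
    lvs                                          # verify ceph_vg0/ceph_lv0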
Jan 27 08:26:35 compute-0 sudo[72354]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmryklkzhhiospiitwzlheiyrvumhptw ; /usr/bin/python3'
Jan 27 08:26:35 compute-0 sudo[72354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:35 compute-0 python3[72356]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:26:35 compute-0 sudo[72354]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:35 compute-0 sudo[72427]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbwsvubejupnmmwuteyxdlbcfjrijfkm ; /usr/bin/python3'
Jan 27 08:26:35 compute-0 sudo[72427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:35 compute-0 python3[72429]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769502395.046767-36915-147607485070363/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:35 compute-0 sudo[72427]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:36 compute-0 sudo[72477]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwjhklfhvklpywckoqbrvwxpjipfdfep ; /usr/bin/python3'
Jan 27 08:26:36 compute-0 sudo[72477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:36 compute-0 python3[72479]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:26:36 compute-0 systemd[1]: Reloading.
Jan 27 08:26:36 compute-0 systemd-rc-local-generator[72505]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:26:36 compute-0 systemd-sysv-generator[72509]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:26:36 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 27 08:26:36 compute-0 bash[72519]: /dev/loop3: [64513]:4328451 (/var/lib/ceph-osd-0.img)
Jan 27 08:26:37 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 27 08:26:37 compute-0 lvm[72520]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:26:37 compute-0 lvm[72520]: VG ceph_vg0 finished
Jan 27 08:26:37 compute-0 sudo[72477]: pam_unix(sudo:session): session closed for user root
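[Annotation] The ceph-osd-losetup-0.service unit enabled above attaches the loop device backing the OSD image, and LVM's pvscan hook then reports ceph_vg0 complete. The mapping printed at 08:26:36 can be reproduced with util-linux losetup; the first command is only a sketch of what ceph-osd-losetup.service.j2 plausibly templates as ExecStart, not the verbatim unit:

    # hypothetical ExecStart: attach the image to the first free loop device and print it
    losetup --find --show /var/lib/ceph-osd-0.img
    # confirm the mapping reported in the log:
    losetup -j /var/lib/ceph-osd-0.img
    # -> /dev/loop3: [64513]:4328451 (/var/lib/ceph-osd-0.img)
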
Jan 27 08:26:39 compute-0 python3[72544]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:26:42 compute-0 sudo[72635]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eujpxszlzrdrtuiypbcqspxocfcknewj ; /usr/bin/python3'
Jan 27 08:26:42 compute-0 sudo[72635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:43 compute-0 python3[72637]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 27 08:26:44 compute-0 groupadd[72643]: group added to /etc/group: name=cephadm, GID=993
Jan 27 08:26:44 compute-0 groupadd[72643]: group added to /etc/gshadow: name=cephadm
Jan 27 08:26:44 compute-0 groupadd[72643]: new group: name=cephadm, GID=993
Jan 27 08:26:44 compute-0 useradd[72650]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 27 08:26:44 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:26:45 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:26:45 compute-0 sudo[72635]: pam_unix(sudo:session): session closed for user root
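[Annotation] The dnf task at 08:26:43 (state=present) is the module form of a plain package install; the groupadd/useradd entries above are the cephadm package's scriptlets creating its service account. CLI equivalent:

    dnf install -y cephadm
    # verify the account the scriptlets created (matches the useradd line above):
    getent passwd cephadm   # cephadm:x:992:993::/var/lib/cephadm:/bin/bash
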
Jan 27 08:26:45 compute-0 sudo[72745]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzvllzkirhdwvpdlmycxsxfutmidzrsf ; /usr/bin/python3'
Jan 27 08:26:45 compute-0 sudo[72745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:45 compute-0 python3[72747]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:26:45 compute-0 sudo[72745]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:45 compute-0 sudo[72773]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrhhcwmdytxncqpirvtfyzautuampcaz ; /usr/bin/python3'
Jan 27 08:26:45 compute-0 sudo[72773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:45 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:26:45 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:26:45 compute-0 systemd[1]: run-r89becff979cb49fd8a5221684c8da69f.service: Deactivated successfully.
Jan 27 08:26:46 compute-0 python3[72775]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:26:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:26:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:26:46 compute-0 sudo[72773]: pam_unix(sudo:session): session closed for user root
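[Annotation] cephadm ls enumerates the Ceph daemons already deployed on this host as JSON; on this freshly prepared node, before bootstrap, the expected result is an empty list:

    /usr/sbin/cephadm ls --no-detail
    # -> []   (no ceph daemons deployed on this host yet)
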
Jan 27 08:26:46 compute-0 sudo[72837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlpcwjkriahdqhjdvdeqwpbahrjvugrm ; /usr/bin/python3'
Jan 27 08:26:46 compute-0 sudo[72837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:46 compute-0 python3[72839]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:46 compute-0 sudo[72837]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:47 compute-0 sudo[72863]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cppxytpplskcifiiktnpctimfjyqywdh ; /usr/bin/python3'
Jan 27 08:26:47 compute-0 sudo[72863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:26:47 compute-0 python3[72865]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:47 compute-0 sudo[72863]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:47 compute-0 sudo[72941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfuczlwieprfhxqlkopswkmbavkdowhj ; /usr/bin/python3'
Jan 27 08:26:47 compute-0 sudo[72941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:47 compute-0 python3[72943]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:26:47 compute-0 sudo[72941]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:48 compute-0 sudo[73014]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkekqjfezsrywssfinxnsclkwjofwwyv ; /usr/bin/python3'
Jan 27 08:26:48 compute-0 sudo[73014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:48 compute-0 python3[73016]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769502407.6250598-37106-178648313687245/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:48 compute-0 sudo[73014]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:48 compute-0 sudo[73116]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pypipoysxjohguthqgqdugmykngbtunj ; /usr/bin/python3'
Jan 27 08:26:48 compute-0 sudo[73116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:48 compute-0 python3[73118]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:26:48 compute-0 sudo[73116]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:49 compute-0 sudo[73189]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysihilfsopgzlfnxmvxbumrwdpkozghw ; /usr/bin/python3'
Jan 27 08:26:49 compute-0 sudo[73189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:49 compute-0 python3[73191]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769502408.6957068-37124-91684717860548/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:49 compute-0 sudo[73189]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:49 compute-0 sudo[73239]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puptfoflxuemqtgkthxkhgohhutswkep ; /usr/bin/python3'
Jan 27 08:26:49 compute-0 sudo[73239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:49 compute-0 python3[73241]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:26:49 compute-0 sudo[73239]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:49 compute-0 sudo[73267]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psvysbhzoirxcnhmhirlhdwxsdxvwglz ; /usr/bin/python3'
Jan 27 08:26:49 compute-0 sudo[73267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:49 compute-0 python3[73269]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:26:49 compute-0 sudo[73267]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:50 compute-0 sudo[73295]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytlixekbuaqbgsthmwqabeftpbazxcai ; /usr/bin/python3'
Jan 27 08:26:50 compute-0 sudo[73295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:50 compute-0 python3[73297]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:26:50 compute-0 sudo[73295]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:50 compute-0 python3[73323]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:26:50 compute-0 sudo[73347]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heukkslzftgkmbqmacwtcasogybacjyz ; /usr/bin/python3'
Jan 27 08:26:50 compute-0 sudo[73347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:26:50 compute-0 python3[73349]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config /home/ceph-admin/assimilate_ceph.conf --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
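[Annotation] The bootstrap invocation above, reflowed for readability; every flag and value (fsid, mon IP, key and config paths) is taken verbatim from the logged command:

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld --skip-prepare-host \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --skip-monitoring-stack --skip-dashboard \
        --mon-ip 192.168.122.100
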
Jan 27 08:26:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:26:51 compute-0 sshd-session[73365]: Accepted publickey for ceph-admin from 192.168.122.100 port 51534 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:26:51 compute-0 systemd-logind[799]: New session 19 of user ceph-admin.
Jan 27 08:26:51 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 27 08:26:51 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 27 08:26:51 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 27 08:26:51 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 27 08:26:51 compute-0 systemd[73369]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:26:51 compute-0 systemd[73369]: Queued start job for default target Main User Target.
Jan 27 08:26:51 compute-0 systemd[73369]: Created slice User Application Slice.
Jan 27 08:26:51 compute-0 systemd[73369]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 27 08:26:51 compute-0 systemd[73369]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 08:26:51 compute-0 systemd[73369]: Reached target Paths.
Jan 27 08:26:51 compute-0 systemd[73369]: Reached target Timers.
Jan 27 08:26:51 compute-0 systemd[73369]: Starting D-Bus User Message Bus Socket...
Jan 27 08:26:51 compute-0 systemd[73369]: Starting Create User's Volatile Files and Directories...
Jan 27 08:26:51 compute-0 systemd[73369]: Listening on D-Bus User Message Bus Socket.
Jan 27 08:26:51 compute-0 systemd[73369]: Reached target Sockets.
Jan 27 08:26:51 compute-0 systemd[73369]: Finished Create User's Volatile Files and Directories.
Jan 27 08:26:51 compute-0 systemd[73369]: Reached target Basic System.
Jan 27 08:26:51 compute-0 systemd[73369]: Reached target Main User Target.
Jan 27 08:26:51 compute-0 systemd[73369]: Startup finished in 112ms.
Jan 27 08:26:51 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 27 08:26:51 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Jan 27 08:26:51 compute-0 sshd-session[73365]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:26:51 compute-0 sudo[73385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 27 08:26:51 compute-0 sudo[73385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:26:51 compute-0 sudo[73385]: pam_unix(sudo:session): session closed for user root
Jan 27 08:26:51 compute-0 sshd-session[73384]: Received disconnect from 192.168.122.100 port 51534:11: disconnected by user
Jan 27 08:26:51 compute-0 sshd-session[73384]: Disconnected from user ceph-admin 192.168.122.100 port 51534
Jan 27 08:26:51 compute-0 sshd-session[73365]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 27 08:26:51 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 27 08:26:51 compute-0 systemd-logind[799]: Session 19 logged out. Waiting for processes to exit.
Jan 27 08:26:51 compute-0 systemd-logind[799]: Removed session 19.
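[Annotation] Session 19 above is bootstrap's SSH self-check: it confirms key-based login as the --ssh-user and passwordless sudo (the bare /bin/echo sudo entry) before orchestration proceeds. Roughly equivalent to:

    ssh -i /home/ceph-admin/.ssh/id_rsa ceph-admin@192.168.122.100 sudo /bin/echo
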
Jan 27 08:26:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3267180920-lower\x2dmapped.mount: Deactivated successfully.
Jan 27 08:27:01 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 27 08:27:01 compute-0 systemd[73369]: Activating special unit Exit the Session...
Jan 27 08:27:01 compute-0 systemd[73369]: Stopped target Main User Target.
Jan 27 08:27:01 compute-0 systemd[73369]: Stopped target Basic System.
Jan 27 08:27:01 compute-0 systemd[73369]: Stopped target Paths.
Jan 27 08:27:01 compute-0 systemd[73369]: Stopped target Sockets.
Jan 27 08:27:01 compute-0 systemd[73369]: Stopped target Timers.
Jan 27 08:27:01 compute-0 systemd[73369]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 27 08:27:01 compute-0 systemd[73369]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 27 08:27:01 compute-0 systemd[73369]: Closed D-Bus User Message Bus Socket.
Jan 27 08:27:01 compute-0 systemd[73369]: Stopped Create User's Volatile Files and Directories.
Jan 27 08:27:01 compute-0 systemd[73369]: Removed slice User Application Slice.
Jan 27 08:27:01 compute-0 systemd[73369]: Reached target Shutdown.
Jan 27 08:27:01 compute-0 systemd[73369]: Finished Exit the Session.
Jan 27 08:27:01 compute-0 systemd[73369]: Reached target Exit the Session.
Jan 27 08:27:01 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 27 08:27:01 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 27 08:27:01 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 27 08:27:01 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 27 08:27:01 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 27 08:27:01 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 27 08:27:01 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 27 08:27:16 compute-0 podman[73423]: 2026-01-27 08:27:16.440158781 +0000 UTC m=+24.984539984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:16 compute-0 podman[73485]: 2026-01-27 08:27:16.482213799 +0000 UTC m=+0.022035978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:16 compute-0 podman[73485]: 2026-01-27 08:27:16.581358554 +0000 UTC m=+0.121180733 container create aecac71bb6f2b5a951254525e23e53c0e43eafc8c06c19c8c8788d9baeadeba0 (image=quay.io/ceph/ceph:v18, name=quizzical_feistel, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:27:16 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 27 08:27:16 compute-0 systemd[1]: Started libpod-conmon-aecac71bb6f2b5a951254525e23e53c0e43eafc8c06c19c8c8788d9baeadeba0.scope.
Jan 27 08:27:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:16 compute-0 podman[73485]: 2026-01-27 08:27:16.691942012 +0000 UTC m=+0.231764201 container init aecac71bb6f2b5a951254525e23e53c0e43eafc8c06c19c8c8788d9baeadeba0 (image=quay.io/ceph/ceph:v18, name=quizzical_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:27:16 compute-0 podman[73485]: 2026-01-27 08:27:16.698854865 +0000 UTC m=+0.238677044 container start aecac71bb6f2b5a951254525e23e53c0e43eafc8c06c19c8c8788d9baeadeba0 (image=quay.io/ceph/ceph:v18, name=quizzical_feistel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:16 compute-0 podman[73485]: 2026-01-27 08:27:16.708710682 +0000 UTC m=+0.248532861 container attach aecac71bb6f2b5a951254525e23e53c0e43eafc8c06c19c8c8788d9baeadeba0 (image=quay.io/ceph/ceph:v18, name=quizzical_feistel, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:27:17 compute-0 quizzical_feistel[73501]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 27 08:27:17 compute-0 systemd[1]: libpod-aecac71bb6f2b5a951254525e23e53c0e43eafc8c06c19c8c8788d9baeadeba0.scope: Deactivated successfully.
Jan 27 08:27:17 compute-0 podman[73485]: 2026-01-27 08:27:17.016324847 +0000 UTC m=+0.556147026 container died aecac71bb6f2b5a951254525e23e53c0e43eafc8c06c19c8c8788d9baeadeba0 (image=quay.io/ceph/ceph:v18, name=quizzical_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f40e621c14578ff7ba00f3f4bad4f39457d10767ed6578e910f365531472e201-merged.mount: Deactivated successfully.
Jan 27 08:27:17 compute-0 podman[73485]: 2026-01-27 08:27:17.17494001 +0000 UTC m=+0.714762189 container remove aecac71bb6f2b5a951254525e23e53c0e43eafc8c06c19c8c8788d9baeadeba0 (image=quay.io/ceph/ceph:v18, name=quizzical_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:27:17 compute-0 systemd[1]: libpod-conmon-aecac71bb6f2b5a951254525e23e53c0e43eafc8c06c19c8c8788d9baeadeba0.scope: Deactivated successfully.
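[Annotation] quizzical_feistel is the first of several short-lived probe containers cephadm launches here (create/init/start/attach, one line of output, died/remove). This one checks the image's Ceph release; its output at 08:27:17 matches:

    podman pull quay.io/ceph/ceph:v18
    podman run --rm quay.io/ceph/ceph:v18 ceph --version
    # -> ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
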
Jan 27 08:27:17 compute-0 podman[73518]: 2026-01-27 08:27:17.27028763 +0000 UTC m=+0.072040959 container create c452684a47d0aa13dcbf01d639faa17f621c3e3543110393b717b611386129d9 (image=quay.io/ceph/ceph:v18, name=silly_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:27:17 compute-0 podman[73518]: 2026-01-27 08:27:17.220065023 +0000 UTC m=+0.021818372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:17 compute-0 systemd[1]: Started libpod-conmon-c452684a47d0aa13dcbf01d639faa17f621c3e3543110393b717b611386129d9.scope.
Jan 27 08:27:17 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:17 compute-0 podman[73518]: 2026-01-27 08:27:17.364004774 +0000 UTC m=+0.165758123 container init c452684a47d0aa13dcbf01d639faa17f621c3e3543110393b717b611386129d9 (image=quay.io/ceph/ceph:v18, name=silly_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:17 compute-0 podman[73518]: 2026-01-27 08:27:17.369521339 +0000 UTC m=+0.171274668 container start c452684a47d0aa13dcbf01d639faa17f621c3e3543110393b717b611386129d9 (image=quay.io/ceph/ceph:v18, name=silly_liskov, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:27:17 compute-0 silly_liskov[73534]: 167 167
Jan 27 08:27:17 compute-0 systemd[1]: libpod-c452684a47d0aa13dcbf01d639faa17f621c3e3543110393b717b611386129d9.scope: Deactivated successfully.
Jan 27 08:27:17 compute-0 podman[73518]: 2026-01-27 08:27:17.383674966 +0000 UTC m=+0.185428345 container attach c452684a47d0aa13dcbf01d639faa17f621c3e3543110393b717b611386129d9 (image=quay.io/ceph/ceph:v18, name=silly_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:27:17 compute-0 podman[73518]: 2026-01-27 08:27:17.384227062 +0000 UTC m=+0.185980431 container died c452684a47d0aa13dcbf01d639faa17f621c3e3543110393b717b611386129d9 (image=quay.io/ceph/ceph:v18, name=silly_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:17 compute-0 podman[73518]: 2026-01-27 08:27:17.557127632 +0000 UTC m=+0.358880961 container remove c452684a47d0aa13dcbf01d639faa17f621c3e3543110393b717b611386129d9 (image=quay.io/ceph/ceph:v18, name=silly_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:17 compute-0 systemd[1]: libpod-conmon-c452684a47d0aa13dcbf01d639faa17f621c3e3543110393b717b611386129d9.scope: Deactivated successfully.
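[Annotation] silly_liskov's single output line, "167 167", is the uid/gid of the ceph user inside the image, which cephadm needs in order to chown the files it writes on the host. A plausible equivalent probe (the exact command cephadm runs is an assumption):

    podman run --rm quay.io/ceph/ceph:v18 stat -c '%u %g' /var/lib/ceph
    # -> 167 167
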
Jan 27 08:27:17 compute-0 podman[73551]: 2026-01-27 08:27:17.628179893 +0000 UTC m=+0.040317281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:18 compute-0 podman[73551]: 2026-01-27 08:27:18.452222202 +0000 UTC m=+0.864359490 container create c3f76a4a4977f00135460a423ec644ca928feca164fd986ab3705e8d8544c58c (image=quay.io/ceph/ceph:v18, name=keen_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:18 compute-0 systemd[1]: Started libpod-conmon-c3f76a4a4977f00135460a423ec644ca928feca164fd986ab3705e8d8544c58c.scope.
Jan 27 08:27:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:18 compute-0 podman[73551]: 2026-01-27 08:27:18.691334849 +0000 UTC m=+1.103472177 container init c3f76a4a4977f00135460a423ec644ca928feca164fd986ab3705e8d8544c58c (image=quay.io/ceph/ceph:v18, name=keen_fermat, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:18 compute-0 podman[73551]: 2026-01-27 08:27:18.701633937 +0000 UTC m=+1.113771225 container start c3f76a4a4977f00135460a423ec644ca928feca164fd986ab3705e8d8544c58c (image=quay.io/ceph/ceph:v18, name=keen_fermat, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 27 08:27:18 compute-0 keen_fermat[73567]: AQDmdnhp7ITsKhAAZays7QCLRlxEiNQhjT3HSg==
Jan 27 08:27:18 compute-0 systemd[1]: libpod-c3f76a4a4977f00135460a423ec644ca928feca164fd986ab3705e8d8544c58c.scope: Deactivated successfully.
Jan 27 08:27:18 compute-0 podman[73551]: 2026-01-27 08:27:18.770949779 +0000 UTC m=+1.183087087 container attach c3f76a4a4977f00135460a423ec644ca928feca164fd986ab3705e8d8544c58c (image=quay.io/ceph/ceph:v18, name=keen_fermat, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:27:18 compute-0 podman[73551]: 2026-01-27 08:27:18.771610348 +0000 UTC m=+1.183747646 container died c3f76a4a4977f00135460a423ec644ca928feca164fd986ab3705e8d8544c58c (image=quay.io/ceph/ceph:v18, name=keen_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d796634796f146ef3cd503d53b93c1164cf127460f21599b136e6e21f4018f8d-merged.mount: Deactivated successfully.
Jan 27 08:27:18 compute-0 podman[73551]: 2026-01-27 08:27:18.901622499 +0000 UTC m=+1.313759787 container remove c3f76a4a4977f00135460a423ec644ca928feca164fd986ab3705e8d8544c58c (image=quay.io/ceph/ceph:v18, name=keen_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:18 compute-0 systemd[1]: libpod-conmon-c3f76a4a4977f00135460a423ec644ca928feca164fd986ab3705e8d8544c58c.scope: Deactivated successfully.
Jan 27 08:27:19 compute-0 podman[73586]: 2026-01-27 08:27:19.029921982 +0000 UTC m=+0.096307218 container create a9930aca102a83614fcf110a35fb7f9a94f436bf5962568ea91db2dcfb346325 (image=quay.io/ceph/ceph:v18, name=ecstatic_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 27 08:27:19 compute-0 podman[73586]: 2026-01-27 08:27:18.974662134 +0000 UTC m=+0.041047440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:19 compute-0 systemd[1]: Started libpod-conmon-a9930aca102a83614fcf110a35fb7f9a94f436bf5962568ea91db2dcfb346325.scope.
Jan 27 08:27:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:19 compute-0 podman[73586]: 2026-01-27 08:27:19.133565525 +0000 UTC m=+0.199950831 container init a9930aca102a83614fcf110a35fb7f9a94f436bf5962568ea91db2dcfb346325 (image=quay.io/ceph/ceph:v18, name=ecstatic_chandrasekhar, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:19 compute-0 podman[73586]: 2026-01-27 08:27:19.143185935 +0000 UTC m=+0.209571191 container start a9930aca102a83614fcf110a35fb7f9a94f436bf5962568ea91db2dcfb346325 (image=quay.io/ceph/ceph:v18, name=ecstatic_chandrasekhar, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:19 compute-0 podman[73586]: 2026-01-27 08:27:19.157713132 +0000 UTC m=+0.224098358 container attach a9930aca102a83614fcf110a35fb7f9a94f436bf5962568ea91db2dcfb346325 (image=quay.io/ceph/ceph:v18, name=ecstatic_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:27:19 compute-0 ecstatic_chandrasekhar[73602]: AQDndnhpje4kCxAAQ0NKnPa/bxHVBNP3BPd68w==
Jan 27 08:27:19 compute-0 systemd[1]: libpod-a9930aca102a83614fcf110a35fb7f9a94f436bf5962568ea91db2dcfb346325.scope: Deactivated successfully.
Jan 27 08:27:19 compute-0 podman[73586]: 2026-01-27 08:27:19.193273077 +0000 UTC m=+0.259658333 container died a9930aca102a83614fcf110a35fb7f9a94f436bf5962568ea91db2dcfb346325 (image=quay.io/ceph/ceph:v18, name=ecstatic_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 27 08:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-74af8227a7ad96795fa235d8139cfbe534e41986d380352bcc3666cbc750353e-merged.mount: Deactivated successfully.
Jan 27 08:27:19 compute-0 podman[73586]: 2026-01-27 08:27:19.782387087 +0000 UTC m=+0.848772343 container remove a9930aca102a83614fcf110a35fb7f9a94f436bf5962568ea91db2dcfb346325 (image=quay.io/ceph/ceph:v18, name=ecstatic_chandrasekhar, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:19 compute-0 podman[73619]: 2026-01-27 08:27:19.864864327 +0000 UTC m=+0.061625967 container create dc367f512641ce568817ecd3ff885826cbc1ab30f9c81b160374a225abd6c0c0 (image=quay.io/ceph/ceph:v18, name=determined_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:19 compute-0 systemd[1]: Started libpod-conmon-dc367f512641ce568817ecd3ff885826cbc1ab30f9c81b160374a225abd6c0c0.scope.
Jan 27 08:27:19 compute-0 podman[73619]: 2026-01-27 08:27:19.825287178 +0000 UTC m=+0.022048868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:19 compute-0 podman[73619]: 2026-01-27 08:27:19.977812071 +0000 UTC m=+0.174573691 container init dc367f512641ce568817ecd3ff885826cbc1ab30f9c81b160374a225abd6c0c0 (image=quay.io/ceph/ceph:v18, name=determined_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:27:19 compute-0 podman[73619]: 2026-01-27 08:27:19.982694317 +0000 UTC m=+0.179455907 container start dc367f512641ce568817ecd3ff885826cbc1ab30f9c81b160374a225abd6c0c0 (image=quay.io/ceph/ceph:v18, name=determined_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:19 compute-0 podman[73619]: 2026-01-27 08:27:19.995552027 +0000 UTC m=+0.192313617 container attach dc367f512641ce568817ecd3ff885826cbc1ab30f9c81b160374a225abd6c0c0 (image=quay.io/ceph/ceph:v18, name=determined_wozniak, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 27 08:27:20 compute-0 determined_wozniak[73636]: AQDodnhp2l0QABAArPBanKFd1PKcQEEYk93Bfg==
Jan 27 08:27:20 compute-0 systemd[1]: libpod-dc367f512641ce568817ecd3ff885826cbc1ab30f9c81b160374a225abd6c0c0.scope: Deactivated successfully.
Jan 27 08:27:20 compute-0 podman[73619]: 2026-01-27 08:27:20.004057635 +0000 UTC m=+0.200819245 container died dc367f512641ce568817ecd3ff885826cbc1ab30f9c81b160374a225abd6c0c0 (image=quay.io/ceph/ceph:v18, name=determined_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:27:20 compute-0 podman[73619]: 2026-01-27 08:27:20.0953004 +0000 UTC m=+0.292062000 container remove dc367f512641ce568817ecd3ff885826cbc1ab30f9c81b160374a225abd6c0c0 (image=quay.io/ceph/ceph:v18, name=determined_wozniak, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 27 08:27:20 compute-0 systemd[1]: libpod-conmon-dc367f512641ce568817ecd3ff885826cbc1ab30f9c81b160374a225abd6c0c0.scope: Deactivated successfully.
Jan 27 08:27:20 compute-0 systemd[1]: libpod-conmon-a9930aca102a83614fcf110a35fb7f9a94f436bf5962568ea91db2dcfb346325.scope: Deactivated successfully.
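[Annotation] The AQ... strings printed by keen_fermat, ecstatic_chandrasekhar and determined_wozniak are freshly generated cephx secrets (the new cluster's mon, admin and bootstrap keys; note they are live secrets captured in this log). cephadm obtains each by running the key generator inside the image, along the lines of:

    podman run --rm quay.io/ceph/ceph:v18 ceph-authtool --gen-print-key
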
Jan 27 08:27:20 compute-0 podman[73656]: 2026-01-27 08:27:20.203226913 +0000 UTC m=+0.077381108 container create 8c8fb2afa84fdfcdbf6518862f896ee73b9dcaf03ac8eb2d38e24d5a307f7ecf (image=quay.io/ceph/ceph:v18, name=cranky_rhodes, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:20 compute-0 systemd[1]: Started libpod-conmon-8c8fb2afa84fdfcdbf6518862f896ee73b9dcaf03ac8eb2d38e24d5a307f7ecf.scope.
Jan 27 08:27:20 compute-0 podman[73656]: 2026-01-27 08:27:20.162264986 +0000 UTC m=+0.036419211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9e0c8ecfd96ee3656945f77766e5ad57f97a88290656f9861ef9f9ca5714f2/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:20 compute-0 podman[73656]: 2026-01-27 08:27:20.301448054 +0000 UTC m=+0.175602259 container init 8c8fb2afa84fdfcdbf6518862f896ee73b9dcaf03ac8eb2d38e24d5a307f7ecf (image=quay.io/ceph/ceph:v18, name=cranky_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:20 compute-0 podman[73656]: 2026-01-27 08:27:20.306801714 +0000 UTC m=+0.180955899 container start 8c8fb2afa84fdfcdbf6518862f896ee73b9dcaf03ac8eb2d38e24d5a307f7ecf (image=quay.io/ceph/ceph:v18, name=cranky_rhodes, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:27:20 compute-0 podman[73656]: 2026-01-27 08:27:20.320527619 +0000 UTC m=+0.194681824 container attach 8c8fb2afa84fdfcdbf6518862f896ee73b9dcaf03ac8eb2d38e24d5a307f7ecf (image=quay.io/ceph/ceph:v18, name=cranky_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:20 compute-0 cranky_rhodes[73672]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 27 08:27:20 compute-0 cranky_rhodes[73672]: setting min_mon_release = pacific
Jan 27 08:27:20 compute-0 cranky_rhodes[73672]: /usr/bin/monmaptool: set fsid to 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:27:20 compute-0 cranky_rhodes[73672]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 27 08:27:20 compute-0 systemd[1]: libpod-8c8fb2afa84fdfcdbf6518862f896ee73b9dcaf03ac8eb2d38e24d5a307f7ecf.scope: Deactivated successfully.
Jan 27 08:27:20 compute-0 podman[73656]: 2026-01-27 08:27:20.334792898 +0000 UTC m=+0.208947113 container died 8c8fb2afa84fdfcdbf6518862f896ee73b9dcaf03ac8eb2d38e24d5a307f7ecf (image=quay.io/ceph/ceph:v18, name=cranky_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:27:20 compute-0 podman[73656]: 2026-01-27 08:27:20.43195389 +0000 UTC m=+0.306108075 container remove 8c8fb2afa84fdfcdbf6518862f896ee73b9dcaf03ac8eb2d38e24d5a307f7ecf (image=quay.io/ceph/ceph:v18, name=cranky_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:20 compute-0 systemd[1]: libpod-conmon-8c8fb2afa84fdfcdbf6518862f896ee73b9dcaf03ac8eb2d38e24d5a307f7ecf.scope: Deactivated successfully.
Jan 27 08:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-db69e05f38d8c2931805b5f4972076bcd52c809d326ab4ebbc3aaffda42b2d95-merged.mount: Deactivated successfully.
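[Annotation] cranky_rhodes wraps monmaptool to seed the initial monitor map: epoch 0, the bootstrap fsid, one monitor, min_mon_release pinned to pacific. A sketch of the wrapped call, with the monitor name taken from the mon data dir below and the address vector inferred from --mon-ip (the exact addrvec is an assumption):

    monmaptool --create --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de \
        --set-min-mon-release=pacific \
        --addv compute-0 '[v2:192.168.122.100:3300,v1:192.168.122.100:6789]' \
        /tmp/monmap
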
Jan 27 08:27:20 compute-0 podman[73691]: 2026-01-27 08:27:20.497841065 +0000 UTC m=+0.042193613 container create 79e96a169c1d14572ce3e0909ddcff9d8f67d96b889befd7958c6e8caa5a839f (image=quay.io/ceph/ceph:v18, name=wonderful_proskuriakova, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:27:20 compute-0 systemd[1]: Started libpod-conmon-79e96a169c1d14572ce3e0909ddcff9d8f67d96b889befd7958c6e8caa5a839f.scope.
Jan 27 08:27:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b40bc9923443eb04cf0af7a9f7af1bb69aff562aa25c55da0c13c6ddd2c9d79/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b40bc9923443eb04cf0af7a9f7af1bb69aff562aa25c55da0c13c6ddd2c9d79/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b40bc9923443eb04cf0af7a9f7af1bb69aff562aa25c55da0c13c6ddd2c9d79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b40bc9923443eb04cf0af7a9f7af1bb69aff562aa25c55da0c13c6ddd2c9d79/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
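The four "supports timestamps until 2038" messages are informational: XFS emits them when a filesystem created without the bigtime feature is (re)mounted, here for the bind-mounted keyring, monmap, log, and mon data paths inside the container. Whether bigtime is enabled can be checked against the backing filesystem; the mount point below is illustrative:

    # bigtime=1 means 64-bit inode timestamps (no year-2038 limit)
    xfs_info / | grep -o 'bigtime=[01]'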
Jan 27 08:27:20 compute-0 podman[73691]: 2026-01-27 08:27:20.474972834 +0000 UTC m=+0.019325412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:20 compute-0 podman[73691]: 2026-01-27 08:27:20.572577098 +0000 UTC m=+0.116929676 container init 79e96a169c1d14572ce3e0909ddcff9d8f67d96b889befd7958c6e8caa5a839f (image=quay.io/ceph/ceph:v18, name=wonderful_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:27:20 compute-0 podman[73691]: 2026-01-27 08:27:20.578214816 +0000 UTC m=+0.122567394 container start 79e96a169c1d14572ce3e0909ddcff9d8f67d96b889befd7958c6e8caa5a839f (image=quay.io/ceph/ceph:v18, name=wonderful_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:27:20 compute-0 podman[73691]: 2026-01-27 08:27:20.583779242 +0000 UTC m=+0.128131820 container attach 79e96a169c1d14572ce3e0909ddcff9d8f67d96b889befd7958c6e8caa5a839f (image=quay.io/ceph/ceph:v18, name=wonderful_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:20 compute-0 systemd[1]: libpod-79e96a169c1d14572ce3e0909ddcff9d8f67d96b889befd7958c6e8caa5a839f.scope: Deactivated successfully.
Jan 27 08:27:20 compute-0 podman[73691]: 2026-01-27 08:27:20.728780203 +0000 UTC m=+0.273132761 container died 79e96a169c1d14572ce3e0909ddcff9d8f67d96b889befd7958c6e8caa5a839f (image=quay.io/ceph/ceph:v18, name=wonderful_proskuriakova, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 08:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b40bc9923443eb04cf0af7a9f7af1bb69aff562aa25c55da0c13c6ddd2c9d79-merged.mount: Deactivated successfully.
Jan 27 08:27:20 compute-0 podman[73691]: 2026-01-27 08:27:20.829911585 +0000 UTC m=+0.374264153 container remove 79e96a169c1d14572ce3e0909ddcff9d8f67d96b889befd7958c6e8caa5a839f (image=quay.io/ceph/ceph:v18, name=wonderful_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:20 compute-0 systemd[1]: libpod-conmon-79e96a169c1d14572ce3e0909ddcff9d8f67d96b889befd7958c6e8caa5a839f.scope: Deactivated successfully.
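The create/start/attach/died/remove bursts above are cephadm helper containers: each runs a single Ceph command under podman and exits, which is why the conmon scopes deactivate within the same second. Given the /tmp/keyring and /tmp/monmap bind mounts seen in the kernel messages, the wonderful_proskuriakova run was most likely the monitor mkfs step. A hypothetical reconstruction, with mount paths and flags assumed rather than taken from the log:

    # Hypothetical cephadm-style helper invocation; host paths are assumptions
    podman run --rm --net=host \
      -v /tmp/keyring:/tmp/keyring:z -v /tmp/monmap:/tmp/monmap:z \
      quay.io/ceph/ceph:v18 \
      ceph-mon --mkfs -i compute-0 \
        --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de \
        --keyring /tmp/keyring --monmap /tmp/monmap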
Jan 27 08:27:20 compute-0 systemd[1]: Reloading.
Jan 27 08:27:21 compute-0 systemd-rc-local-generator[73772]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:21 compute-0 systemd-sysv-generator[73777]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:21 compute-0 systemd[1]: Reloading.
Jan 27 08:27:21 compute-0 systemd-sysv-generator[73817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:21 compute-0 systemd-rc-local-generator[73814]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:21 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 27 08:27:21 compute-0 systemd[1]: Reloading.
Jan 27 08:27:21 compute-0 systemd-rc-local-generator[73852]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:21 compute-0 systemd-sysv-generator[73855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:21 compute-0 systemd[1]: Reached target Ceph cluster 281e9bde-2795-59f4-98ac-90cf5b49a2de.
Jan 27 08:27:21 compute-0 systemd[1]: Reloading.
Jan 27 08:27:21 compute-0 systemd-rc-local-generator[73890]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:21 compute-0 systemd-sysv-generator[73893]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:21 compute-0 systemd[1]: Reloading.
Jan 27 08:27:21 compute-0 systemd-sysv-generator[73931]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:21 compute-0 systemd-rc-local-generator[73927]: /etc/rc.d/rc.local is not marked executable, skipping.
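Each "Reloading." pass (triggered as cephadm installs its units) re-runs the systemd generators, so the same two notices repeat: systemd-rc-local-generator skips /etc/rc.d/rc.local because it lacks the executable bit, and systemd-sysv-generator shims the legacy network initscript. Both are benign; if rc.local is actually meant to run at boot, marking it executable removes the first notice:

    # Only if /etc/rc.d/rc.local is supposed to run at boot
    chmod +x /etc/rc.d/rc.local
    systemctl daemon-reload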
Jan 27 08:27:22 compute-0 systemd[1]: Created slice Slice /system/ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de.
Jan 27 08:27:22 compute-0 systemd[1]: Reached target System Time Set.
Jan 27 08:27:22 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 27 08:27:22 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 281e9bde-2795-59f4-98ac-90cf5b49a2de...
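cephadm manages each daemon through a templated unit named ceph-<fsid>@<type>.<id>.service, so the monitor being started here maps to a unit name derivable from this line:

    systemctl status 'ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@mon.compute-0.service'
    journalctl -u 'ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@mon.compute-0.service' -f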
Jan 27 08:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:22 compute-0 podman[73983]: 2026-01-27 08:27:22.324337449 +0000 UTC m=+0.052041998 container create 6f6584e459bb42b4ff167d11b7efefb99637f05a6af786c00e050526f947039a (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 08:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f28d54ca344e816a9edb59dde926e0fcd76f7f405e6a6b2a1c40c51f048244/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f28d54ca344e816a9edb59dde926e0fcd76f7f405e6a6b2a1c40c51f048244/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f28d54ca344e816a9edb59dde926e0fcd76f7f405e6a6b2a1c40c51f048244/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f28d54ca344e816a9edb59dde926e0fcd76f7f405e6a6b2a1c40c51f048244/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:22 compute-0 podman[73983]: 2026-01-27 08:27:22.298005672 +0000 UTC m=+0.025710241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:22 compute-0 podman[73983]: 2026-01-27 08:27:22.473443685 +0000 UTC m=+0.201148284 container init 6f6584e459bb42b4ff167d11b7efefb99637f05a6af786c00e050526f947039a (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:27:22 compute-0 podman[73983]: 2026-01-27 08:27:22.478426285 +0000 UTC m=+0.206130824 container start 6f6584e459bb42b4ff167d11b7efefb99637f05a6af786c00e050526f947039a (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 08:27:22 compute-0 ceph-mon[74003]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:27:22 compute-0 ceph-mon[74003]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 27 08:27:22 compute-0 ceph-mon[74003]: pidfile_write: ignore empty --pid-file
Jan 27 08:27:22 compute-0 ceph-mon[74003]: load: jerasure load: lrc 
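The daemon is pid 2 inside its container (74003 on the host) and runs as ceph:ceph (167:167). Since the long-lived container created above is named after the fsid and daemon id, the running binary can be checked from the host:

    # Container name taken from the "container create" event at 08:27:22
    podman exec ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0 ceph --version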
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: RocksDB version: 7.9.2
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Git sha 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: DB SUMMARY
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: DB Session ID:  OVNIDOHF42EZEOHVO309
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: CURRENT file:  CURRENT
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: IDENTITY file:  IDENTITY
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                         Options.error_if_exists: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                       Options.create_if_missing: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                         Options.paranoid_checks: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                                     Options.env: 0x55dde7942c40
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                                Options.info_log: 0x55dde9d68ec0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                Options.max_file_opening_threads: 16
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                              Options.statistics: (nil)
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                               Options.use_fsync: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                       Options.max_log_file_size: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                         Options.allow_fallocate: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                        Options.use_direct_reads: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:          Options.create_missing_column_families: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                              Options.db_log_dir: 
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                                 Options.wal_dir: 
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                   Options.advise_random_on_open: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                    Options.write_buffer_manager: 0x55dde9d78b40
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                            Options.rate_limiter: (nil)
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                  Options.unordered_write: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                               Options.row_cache: None
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                              Options.wal_filter: None
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.allow_ingest_behind: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.two_write_queues: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.manual_wal_flush: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.wal_compression: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.atomic_flush: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                 Options.log_readahead_size: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.allow_data_in_errors: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.db_host_id: __hostname__
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.max_background_jobs: 2
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.max_background_compactions: -1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.max_subcompactions: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.max_total_wal_size: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                          Options.max_open_files: -1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                          Options.bytes_per_sync: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:       Options.compaction_readahead_size: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                  Options.max_background_flushes: -1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Compression algorithms supported:
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         kZSTD supported: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         kXpressCompression supported: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         kBZip2Compression supported: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         kLZ4Compression supported: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         kZlibCompression supported: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         kLZ4HCCompression supported: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         kSnappyCompression supported: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:           Options.merge_operator: 
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:        Options.compaction_filter: None
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dde9d68aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55dde9d611f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:        Options.write_buffer_size: 33554432
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:  Options.max_write_buffer_number: 2
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:          Options.compression: NoCompression
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.num_levels: 7
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6831b649-debc-4b07-a687-adb2cf43b3c1
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502442520835, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 27 08:27:22 compute-0 bash[73983]: 6f6584e459bb42b4ff167d11b7efefb99637f05a6af786c00e050526f947039a
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502442597051, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "OVNIDOHF42EZEOHVO309", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502442597217, "job": 1, "event": "recovery_finished"}
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 27 08:27:22 compute-0 systemd[1]: Started Ceph mon.compute-0 for 281e9bde-2795-59f4-98ac-90cf5b49a2de.
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55dde9d8ae00
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: DB pointer 0x55dde9e14000
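Recovery replayed the 807-byte WAL (000004.log) into SST file 000008 and wrote a fresh MANIFEST; the EVENT_LOG_v1 entries are plain JSON, so the sequence can be extracted mechanically. A sketch, assuming the ceph-mon syslog identifier used in these lines and that jq is installed:

    # List RocksDB lifecycle events (recovery_started, table_file_creation, ...)
    journalctl -t ceph-mon --no-pager \
      | grep -o 'EVENT_LOG_v1 {.*}' \
      | sed 's/^EVENT_LOG_v1 //' \
      | jq -r '.event'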
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:27:22 compute-0 ceph-mon[74003]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.076       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.076       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.076       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.076       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55dde9d611f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
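The non-default values in this dump (32 MB write buffers, NoCompression, dynamic level bytes, a 512 MB BinnedLRUCache) reflect Ceph's monitor tuning (mon_rocksdb_options and related rocksdb_* settings) rather than stock RocksDB defaults. With a working admin keyring (for example inside cephadm shell), the effective settings can be read back:

    # Show the monitor's RocksDB option string, including defaulted values
    ceph config show-with-defaults mon.compute-0 | grep rocksdb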
Jan 27 08:27:22 compute-0 ceph-mon[74003]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@-1(???) e0 preinit fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 27 08:27:22 compute-0 ceph-mon[74003]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:27:22 compute-0 ceph-mon[74003]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
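The ratios written into the pending osdmap (nearfull 0.85, backfillfull 0.90, full 0.95) are the stock defaults for a new cluster. On a live cluster they are changed through the monitors, not by editing the map:

    # Values shown are the defaults recorded in the log above
    ceph osd set-nearfull-ratio 0.85
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-full-ratio 0.95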
Jan 27 08:27:22 compute-0 podman[74025]: 2026-01-27 08:27:22.67112552 +0000 UTC m=+0.020656279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:22 compute-0 podman[74025]: 2026-01-27 08:27:22.785541735 +0000 UTC m=+0.135072474 container create 4ec3165727ee99d61fefc700d79a9d21f1c8fa6d73ba2492c147288bfad3344a (image=quay.io/ceph/ceph:v18, name=naughty_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 27 08:27:22 compute-0 ceph-mon[74003]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 27 08:27:22 compute-0 systemd[1]: Started libpod-conmon-4ec3165727ee99d61fefc700d79a9d21f1c8fa6d73ba2492c147288bfad3344a.scope.
Jan 27 08:27:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979fa5e4734492d7ec3b6297d7c5a7b17dc64da5fe38a86a117e56250e195d53/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979fa5e4734492d7ec3b6297d7c5a7b17dc64da5fe38a86a117e56250e195d53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979fa5e4734492d7ec3b6297d7c5a7b17dc64da5fe38a86a117e56250e195d53/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:27:22 compute-0 ceph-mon[74003]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 27 08:27:22 compute-0 ceph-mon[74003]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:27:22 compute-0 ceph-mon[74003]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2026-01-27T08:27:20.623050Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
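This metadata blob (arch, kernel, container image, and the vda device whose unique-id lookup fell back and failed, as noted above) is stored by the monitor and stays queryable once the cluster is up:

    # Returns the same fields as JSON
    ceph mon metadata compute-0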
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 27 08:27:23 compute-0 podman[74025]: 2026-01-27 08:27:23.06741573 +0000 UTC m=+0.416946559 container init 4ec3165727ee99d61fefc700d79a9d21f1c8fa6d73ba2492c147288bfad3344a (image=quay.io/ceph/ceph:v18, name=naughty_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:23 compute-0 podman[74025]: 2026-01-27 08:27:23.081462683 +0000 UTC m=+0.430993422 container start 4ec3165727ee99d61fefc700d79a9d21f1c8fa6d73ba2492c147288bfad3344a (image=quay.io/ceph/ceph:v18, name=naughty_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).mds e1 new map
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
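The empty fsmap is expected on a cluster mkfs'd seconds earlier; no MDS or CephFS volume exists yet, which the CLI reports the same way:

    # Prints "No filesystems configured" on this cluster
    ceph fs ls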
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 27 08:27:23 compute-0 ceph-mon[74003]: log_channel(cluster) log [DBG] : fsmap 
Jan 27 08:27:23 compute-0 podman[74025]: 2026-01-27 08:27:23.236774304 +0000 UTC m=+0.586305043 container attach 4ec3165727ee99d61fefc700d79a9d21f1c8fa6d73ba2492c147288bfad3344a (image=quay.io/ceph/ceph:v18, name=naughty_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mkfs 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 27 08:27:23 compute-0 ceph-mon[74003]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 27 08:27:23 compute-0 ceph-mon[74003]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader) e1 handle_auth_request failed to assign global_id
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 27 08:27:23 compute-0 ceph-mon[74003]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 27 08:27:23 compute-0 ceph-mon[74003]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3145525280' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:   cluster:
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:     id:     281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:     health: HEALTH_OK
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:  
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:   services:
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:     mon: 1 daemons, quorum compute-0 (age 0.763384s)
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:     mgr: no daemons active
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:     osd: 0 osds: 0 up, 0 in
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:  
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:   data:
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:     pools:   0 pools, 0 pgs
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:     objects: 0 objects, 0 B
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:     usage:   0 B used, 0 B / 0 B avail
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:     pgs:     
Jan 27 08:27:23 compute-0 naughty_burnell[74059]:  
Jan 27 08:27:23 compute-0 systemd[1]: libpod-4ec3165727ee99d61fefc700d79a9d21f1c8fa6d73ba2492c147288bfad3344a.scope: Deactivated successfully.
Jan 27 08:27:23 compute-0 podman[74025]: 2026-01-27 08:27:23.684615967 +0000 UTC m=+1.034146706 container died 4ec3165727ee99d61fefc700d79a9d21f1c8fa6d73ba2492c147288bfad3344a (image=quay.io/ceph/ceph:v18, name=naughty_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-979fa5e4734492d7ec3b6297d7c5a7b17dc64da5fe38a86a117e56250e195d53-merged.mount: Deactivated successfully.
Jan 27 08:27:23 compute-0 podman[74025]: 2026-01-27 08:27:23.933826796 +0000 UTC m=+1.283357535 container remove 4ec3165727ee99d61fefc700d79a9d21f1c8fa6d73ba2492c147288bfad3344a (image=quay.io/ceph/ceph:v18, name=naughty_burnell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:23 compute-0 systemd[1]: libpod-conmon-4ec3165727ee99d61fefc700d79a9d21f1c8fa6d73ba2492c147288bfad3344a.scope: Deactivated successfully.
Jan 27 08:27:24 compute-0 podman[74096]: 2026-01-27 08:27:24.008627251 +0000 UTC m=+0.055993080 container create 7250b77651b89af43108d9686ca8162b3b2cef78c36c1ccb8d46cc0648ab4e02 (image=quay.io/ceph/ceph:v18, name=loving_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:27:24 compute-0 systemd[1]: Started libpod-conmon-7250b77651b89af43108d9686ca8162b3b2cef78c36c1ccb8d46cc0648ab4e02.scope.
Jan 27 08:27:24 compute-0 podman[74096]: 2026-01-27 08:27:23.976318166 +0000 UTC m=+0.023684015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd9b554da483c871d91b309024027b3b437cb93be18cf36385044e91dedaa39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd9b554da483c871d91b309024027b3b437cb93be18cf36385044e91dedaa39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd9b554da483c871d91b309024027b3b437cb93be18cf36385044e91dedaa39/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd9b554da483c871d91b309024027b3b437cb93be18cf36385044e91dedaa39/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:24 compute-0 podman[74096]: 2026-01-27 08:27:24.14213316 +0000 UTC m=+0.189499039 container init 7250b77651b89af43108d9686ca8162b3b2cef78c36c1ccb8d46cc0648ab4e02 (image=quay.io/ceph/ceph:v18, name=loving_turing, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 27 08:27:24 compute-0 podman[74096]: 2026-01-27 08:27:24.147162001 +0000 UTC m=+0.194527840 container start 7250b77651b89af43108d9686ca8162b3b2cef78c36c1ccb8d46cc0648ab4e02 (image=quay.io/ceph/ceph:v18, name=loving_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:27:24 compute-0 podman[74096]: 2026-01-27 08:27:24.152440519 +0000 UTC m=+0.199806368 container attach 7250b77651b89af43108d9686ca8162b3b2cef78c36c1ccb8d46cc0648ab4e02 (image=quay.io/ceph/ceph:v18, name=loving_turing, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 27 08:27:24 compute-0 ceph-mon[74003]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 27 08:27:24 compute-0 ceph-mon[74003]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 27 08:27:24 compute-0 ceph-mon[74003]: fsmap 
Jan 27 08:27:24 compute-0 ceph-mon[74003]: osdmap e1: 0 total, 0 up, 0 in
Jan 27 08:27:24 compute-0 ceph-mon[74003]: mgrmap e1: no daemons active
Jan 27 08:27:24 compute-0 ceph-mon[74003]: from='client.? 192.168.122.100:0/3145525280' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 27 08:27:24 compute-0 ceph-mon[74003]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 27 08:27:24 compute-0 ceph-mon[74003]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2929639059' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 27 08:27:24 compute-0 ceph-mon[74003]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2929639059' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 27 08:27:24 compute-0 loving_turing[74112]: 
Jan 27 08:27:24 compute-0 loving_turing[74112]: [global]
Jan 27 08:27:24 compute-0 loving_turing[74112]:         fsid = 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:27:24 compute-0 loving_turing[74112]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 27 08:27:24 compute-0 systemd[1]: libpod-7250b77651b89af43108d9686ca8162b3b2cef78c36c1ccb8d46cc0648ab4e02.scope: Deactivated successfully.
Jan 27 08:27:24 compute-0 podman[74096]: 2026-01-27 08:27:24.570517197 +0000 UTC m=+0.617883026 container died 7250b77651b89af43108d9686ca8162b3b2cef78c36c1ccb8d46cc0648ab4e02 (image=quay.io/ceph/ceph:v18, name=loving_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 27 08:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbd9b554da483c871d91b309024027b3b437cb93be18cf36385044e91dedaa39-merged.mount: Deactivated successfully.
Jan 27 08:27:24 compute-0 podman[74096]: 2026-01-27 08:27:24.76875127 +0000 UTC m=+0.816117099 container remove 7250b77651b89af43108d9686ca8162b3b2cef78c36c1ccb8d46cc0648ab4e02 (image=quay.io/ceph/ceph:v18, name=loving_turing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:24 compute-0 systemd[1]: libpod-conmon-7250b77651b89af43108d9686ca8162b3b2cef78c36c1ccb8d46cc0648ab4e02.scope: Deactivated successfully.
Jan 27 08:27:24 compute-0 podman[74151]: 2026-01-27 08:27:24.903474383 +0000 UTC m=+0.116129924 container create 12d97daa70b4b9ea2f41c1bac791460abbbe51b8b8cf041ac8b2362a5357b91b (image=quay.io/ceph/ceph:v18, name=stoic_goldberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:27:24 compute-0 podman[74151]: 2026-01-27 08:27:24.809218563 +0000 UTC m=+0.021874134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:25 compute-0 systemd[1]: Started libpod-conmon-12d97daa70b4b9ea2f41c1bac791460abbbe51b8b8cf041ac8b2362a5357b91b.scope.
Jan 27 08:27:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7759fe613b2787bb6193ef1df578c52d31a39034910cacb082f93b40d385081/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7759fe613b2787bb6193ef1df578c52d31a39034910cacb082f93b40d385081/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7759fe613b2787bb6193ef1df578c52d31a39034910cacb082f93b40d385081/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7759fe613b2787bb6193ef1df578c52d31a39034910cacb082f93b40d385081/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:25 compute-0 podman[74151]: 2026-01-27 08:27:25.034941395 +0000 UTC m=+0.247596996 container init 12d97daa70b4b9ea2f41c1bac791460abbbe51b8b8cf041ac8b2362a5357b91b (image=quay.io/ceph/ceph:v18, name=stoic_goldberg, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:27:25 compute-0 podman[74151]: 2026-01-27 08:27:25.041019475 +0000 UTC m=+0.253675016 container start 12d97daa70b4b9ea2f41c1bac791460abbbe51b8b8cf041ac8b2362a5357b91b (image=quay.io/ceph/ceph:v18, name=stoic_goldberg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 27 08:27:25 compute-0 podman[74151]: 2026-01-27 08:27:25.059921865 +0000 UTC m=+0.272577436 container attach 12d97daa70b4b9ea2f41c1bac791460abbbe51b8b8cf041ac8b2362a5357b91b (image=quay.io/ceph/ceph:v18, name=stoic_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 08:27:25 compute-0 ceph-mon[74003]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:27:25 compute-0 ceph-mon[74003]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2426293360' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:27:25 compute-0 systemd[1]: libpod-12d97daa70b4b9ea2f41c1bac791460abbbe51b8b8cf041ac8b2362a5357b91b.scope: Deactivated successfully.
Jan 27 08:27:25 compute-0 podman[74151]: 2026-01-27 08:27:25.416695987 +0000 UTC m=+0.629351568 container died 12d97daa70b4b9ea2f41c1bac791460abbbe51b8b8cf041ac8b2362a5357b91b (image=quay.io/ceph/ceph:v18, name=stoic_goldberg, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 08:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7759fe613b2787bb6193ef1df578c52d31a39034910cacb082f93b40d385081-merged.mount: Deactivated successfully.
Jan 27 08:27:25 compute-0 ceph-mon[74003]: from='client.? 192.168.122.100:0/2929639059' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 27 08:27:25 compute-0 ceph-mon[74003]: from='client.? 192.168.122.100:0/2929639059' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 27 08:27:25 compute-0 ceph-mon[74003]: from='client.? 192.168.122.100:0/2426293360' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:27:25 compute-0 podman[74151]: 2026-01-27 08:27:25.632343637 +0000 UTC m=+0.844999178 container remove 12d97daa70b4b9ea2f41c1bac791460abbbe51b8b8cf041ac8b2362a5357b91b (image=quay.io/ceph/ceph:v18, name=stoic_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:27:25 compute-0 systemd[1]: libpod-conmon-12d97daa70b4b9ea2f41c1bac791460abbbe51b8b8cf041ac8b2362a5357b91b.scope: Deactivated successfully.
Jan 27 08:27:25 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 281e9bde-2795-59f4-98ac-90cf5b49a2de...
Jan 27 08:27:25 compute-0 ceph-mon[74003]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 27 08:27:25 compute-0 ceph-mon[74003]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 27 08:27:25 compute-0 ceph-mon[74003]: mon.compute-0@0(leader) e1 shutdown
Jan 27 08:27:25 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0[73999]: 2026-01-27T08:27:25.814+0000 7f779ab26640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 27 08:27:25 compute-0 ceph-mon[74003]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 27 08:27:25 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0[73999]: 2026-01-27T08:27:25.814+0000 7f779ab26640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 27 08:27:25 compute-0 ceph-mon[74003]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 27 08:27:25 compute-0 podman[74236]: 2026-01-27 08:27:25.846724201 +0000 UTC m=+0.070391953 container died 6f6584e459bb42b4ff167d11b7efefb99637f05a6af786c00e050526f947039a (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2f28d54ca344e816a9edb59dde926e0fcd76f7f405e6a6b2a1c40c51f048244-merged.mount: Deactivated successfully.
Jan 27 08:27:25 compute-0 podman[74236]: 2026-01-27 08:27:25.88167126 +0000 UTC m=+0.105339012 container remove 6f6584e459bb42b4ff167d11b7efefb99637f05a6af786c00e050526f947039a (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:25 compute-0 bash[74236]: ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0
Jan 27 08:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:27:25 compute-0 systemd[1]: ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@mon.compute-0.service: Deactivated successfully.
Jan 27 08:27:25 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 281e9bde-2795-59f4-98ac-90cf5b49a2de.
Jan 27 08:27:26 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 281e9bde-2795-59f4-98ac-90cf5b49a2de...
Jan 27 08:27:26 compute-0 podman[74337]: 2026-01-27 08:27:26.179547762 +0000 UTC m=+0.038947032 container create b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/305638fd534a3429c0246ae8470d998c9f738631386e5cc37f7338e8e7a5974c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/305638fd534a3429c0246ae8470d998c9f738631386e5cc37f7338e8e7a5974c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/305638fd534a3429c0246ae8470d998c9f738631386e5cc37f7338e8e7a5974c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/305638fd534a3429c0246ae8470d998c9f738631386e5cc37f7338e8e7a5974c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:26 compute-0 podman[74337]: 2026-01-27 08:27:26.237141745 +0000 UTC m=+0.096541055 container init b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:27:26 compute-0 podman[74337]: 2026-01-27 08:27:26.243363679 +0000 UTC m=+0.102762949 container start b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:26 compute-0 bash[74337]: b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8
Jan 27 08:27:26 compute-0 podman[74337]: 2026-01-27 08:27:26.160451518 +0000 UTC m=+0.019850798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:26 compute-0 systemd[1]: Started Ceph mon.compute-0 for 281e9bde-2795-59f4-98ac-90cf5b49a2de.
Jan 27 08:27:26 compute-0 ceph-mon[74357]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:27:26 compute-0 ceph-mon[74357]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 27 08:27:26 compute-0 ceph-mon[74357]: pidfile_write: ignore empty --pid-file
Jan 27 08:27:26 compute-0 ceph-mon[74357]: load: jerasure load: lrc 
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: RocksDB version: 7.9.2
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Git sha 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: DB SUMMARY
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: DB Session ID:  9F6VHEUNOOK9VA53XR25
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: CURRENT file:  CURRENT
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: IDENTITY file:  IDENTITY
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55210 ; 
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                         Options.error_if_exists: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                       Options.create_if_missing: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                         Options.paranoid_checks: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                                     Options.env: 0x55f59c986c40
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                                Options.info_log: 0x55f59eb4b040
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                Options.max_file_opening_threads: 16
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                              Options.statistics: (nil)
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                               Options.use_fsync: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                       Options.max_log_file_size: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                         Options.allow_fallocate: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                        Options.use_direct_reads: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:          Options.create_missing_column_families: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                              Options.db_log_dir: 
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                                 Options.wal_dir: 
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                   Options.advise_random_on_open: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                    Options.write_buffer_manager: 0x55f59eb5ab40
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                            Options.rate_limiter: (nil)
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                  Options.unordered_write: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                               Options.row_cache: None
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                              Options.wal_filter: None
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.allow_ingest_behind: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.two_write_queues: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.manual_wal_flush: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.wal_compression: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.atomic_flush: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                 Options.log_readahead_size: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.allow_data_in_errors: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.db_host_id: __hostname__
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.max_background_jobs: 2
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.max_background_compactions: -1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.max_subcompactions: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.max_total_wal_size: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                          Options.max_open_files: -1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                          Options.bytes_per_sync: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:       Options.compaction_readahead_size: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                  Options.max_background_flushes: -1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Compression algorithms supported:
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         kZSTD supported: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         kXpressCompression supported: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         kBZip2Compression supported: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         kLZ4Compression supported: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         kZlibCompression supported: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         kLZ4HCCompression supported: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         kSnappyCompression supported: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:           Options.merge_operator: 
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:        Options.compaction_filter: None
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f59eb4ac40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f59eb431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:        Options.write_buffer_size: 33554432
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:  Options.max_write_buffer_number: 2
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:          Options.compression: NoCompression
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.num_levels: 7
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6831b649-debc-4b07-a687-adb2cf43b3c1
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502446289374, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502446295255, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 136, "table_properties": {"data_size": 53385, "index_size": 170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2933, "raw_average_key_size": 29, "raw_value_size": 51027, "raw_average_value_size": 515, "num_data_blocks": 9, "num_entries": 99, "num_filter_entries": 99, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502446, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502446295546, "job": 1, "event": "recovery_finished"}
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f59eb6ce00
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: DB pointer 0x55f59ebf6000
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:27:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.46 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.4      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0   55.46 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.4      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.4      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.4      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 2.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 2.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f59eb431f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 27 08:27:26 compute-0 ceph-mon[74357]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@-1(???) e1 preinit fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@-1(???).mds e1 new map
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 27 08:27:26 compute-0 ceph-mon[74357]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:27:26 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 27 08:27:26 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:27:26 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap 
Jan 27 08:27:26 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 27 08:27:26 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 27 08:27:26 compute-0 podman[74358]: 2026-01-27 08:27:26.327706822 +0000 UTC m=+0.048695255 container create 152a12bda56d10cd888ad09c5c80adc94c6e5e34c3f20b56b138995539b74cf3 (image=quay.io/ceph/ceph:v18, name=compassionate_hawking, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:27:26 compute-0 systemd[1]: Started libpod-conmon-152a12bda56d10cd888ad09c5c80adc94c6e5e34c3f20b56b138995539b74cf3.scope.
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 27 08:27:26 compute-0 ceph-mon[74357]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 27 08:27:26 compute-0 ceph-mon[74357]: fsmap 
Jan 27 08:27:26 compute-0 ceph-mon[74357]: osdmap e1: 0 total, 0 up, 0 in
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mgrmap e1: no daemons active
Jan 27 08:27:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ef98bc14871ff6528c77cfa7568b8e94c99c824de1e2e5758063b57d092fc8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ef98bc14871ff6528c77cfa7568b8e94c99c824de1e2e5758063b57d092fc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ef98bc14871ff6528c77cfa7568b8e94c99c824de1e2e5758063b57d092fc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:26 compute-0 podman[74358]: 2026-01-27 08:27:26.310304644 +0000 UTC m=+0.031293107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:26 compute-0 podman[74358]: 2026-01-27 08:27:26.404000078 +0000 UTC m=+0.124988541 container init 152a12bda56d10cd888ad09c5c80adc94c6e5e34c3f20b56b138995539b74cf3 (image=quay.io/ceph/ceph:v18, name=compassionate_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 27 08:27:26 compute-0 podman[74358]: 2026-01-27 08:27:26.4122595 +0000 UTC m=+0.133247933 container start 152a12bda56d10cd888ad09c5c80adc94c6e5e34c3f20b56b138995539b74cf3 (image=quay.io/ceph/ceph:v18, name=compassionate_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:27:26 compute-0 podman[74358]: 2026-01-27 08:27:26.415434479 +0000 UTC m=+0.136423012 container attach 152a12bda56d10cd888ad09c5c80adc94c6e5e34c3f20b56b138995539b74cf3 (image=quay.io/ceph/ceph:v18, name=compassionate_hawking, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:27:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Jan 27 08:27:26 compute-0 systemd[1]: libpod-152a12bda56d10cd888ad09c5c80adc94c6e5e34c3f20b56b138995539b74cf3.scope: Deactivated successfully.
Jan 27 08:27:26 compute-0 podman[74358]: 2026-01-27 08:27:26.88356661 +0000 UTC m=+0.604555063 container died 152a12bda56d10cd888ad09c5c80adc94c6e5e34c3f20b56b138995539b74cf3 (image=quay.io/ceph/ceph:v18, name=compassionate_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 27 08:27:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3ef98bc14871ff6528c77cfa7568b8e94c99c824de1e2e5758063b57d092fc8-merged.mount: Deactivated successfully.
Jan 27 08:27:27 compute-0 podman[74358]: 2026-01-27 08:27:27.452681809 +0000 UTC m=+1.173670242 container remove 152a12bda56d10cd888ad09c5c80adc94c6e5e34c3f20b56b138995539b74cf3 (image=quay.io/ceph/ceph:v18, name=compassionate_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:27:27 compute-0 podman[74449]: 2026-01-27 08:27:27.505090247 +0000 UTC m=+0.029682283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:27 compute-0 podman[74449]: 2026-01-27 08:27:27.626300782 +0000 UTC m=+0.150892828 container create d802143c6e5b77746458511149fa1170bb8a66415d165337bad375c567f156aa (image=quay.io/ceph/ceph:v18, name=lucid_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 27 08:27:27 compute-0 systemd[1]: Started libpod-conmon-d802143c6e5b77746458511149fa1170bb8a66415d165337bad375c567f156aa.scope.
Jan 27 08:27:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fda2e9555aed56c93819cc8a2d886acdfbc5ba4d70cfa3f93e61e172f0fe9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fda2e9555aed56c93819cc8a2d886acdfbc5ba4d70cfa3f93e61e172f0fe9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fda2e9555aed56c93819cc8a2d886acdfbc5ba4d70cfa3f93e61e172f0fe9c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:27 compute-0 podman[74449]: 2026-01-27 08:27:27.792399983 +0000 UTC m=+0.316992009 container init d802143c6e5b77746458511149fa1170bb8a66415d165337bad375c567f156aa (image=quay.io/ceph/ceph:v18, name=lucid_babbage, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 08:27:27 compute-0 podman[74449]: 2026-01-27 08:27:27.7983552 +0000 UTC m=+0.322947206 container start d802143c6e5b77746458511149fa1170bb8a66415d165337bad375c567f156aa (image=quay.io/ceph/ceph:v18, name=lucid_babbage, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:27:27 compute-0 podman[74449]: 2026-01-27 08:27:27.850992134 +0000 UTC m=+0.375584140 container attach d802143c6e5b77746458511149fa1170bb8a66415d165337bad375c567f156aa (image=quay.io/ceph/ceph:v18, name=lucid_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 27 08:27:27 compute-0 systemd[1]: libpod-conmon-152a12bda56d10cd888ad09c5c80adc94c6e5e34c3f20b56b138995539b74cf3.scope: Deactivated successfully.
Jan 27 08:27:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Jan 27 08:27:28 compute-0 systemd[1]: libpod-d802143c6e5b77746458511149fa1170bb8a66415d165337bad375c567f156aa.scope: Deactivated successfully.
Jan 27 08:27:28 compute-0 podman[74491]: 2026-01-27 08:27:28.236534413 +0000 UTC m=+0.020950209 container died d802143c6e5b77746458511149fa1170bb8a66415d165337bad375c567f156aa (image=quay.io/ceph/ceph:v18, name=lucid_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-54fda2e9555aed56c93819cc8a2d886acdfbc5ba4d70cfa3f93e61e172f0fe9c-merged.mount: Deactivated successfully.
Jan 27 08:27:28 compute-0 podman[74491]: 2026-01-27 08:27:28.510468484 +0000 UTC m=+0.294884260 container remove d802143c6e5b77746458511149fa1170bb8a66415d165337bad375c567f156aa (image=quay.io/ceph/ceph:v18, name=lucid_babbage, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 08:27:28 compute-0 systemd[1]: libpod-conmon-d802143c6e5b77746458511149fa1170bb8a66415d165337bad375c567f156aa.scope: Deactivated successfully.
Jan 27 08:27:28 compute-0 systemd[1]: Reloading.
Jan 27 08:27:28 compute-0 systemd-sysv-generator[74534]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:28 compute-0 systemd-rc-local-generator[74529]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:29 compute-0 systemd[1]: Reloading.
Jan 27 08:27:29 compute-0 systemd-rc-local-generator[74572]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:29 compute-0 systemd-sysv-generator[74576]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:29 compute-0 systemd[1]: Starting Ceph mgr.compute-0.vujqxq for 281e9bde-2795-59f4-98ac-90cf5b49a2de...
Jan 27 08:27:29 compute-0 podman[74631]: 2026-01-27 08:27:29.658843917 +0000 UTC m=+0.072058089 container create 3429ee293a25f1df2e70ba59705567adb65473d671002dcca4f587eb75ffcdcc (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:27:29 compute-0 podman[74631]: 2026-01-27 08:27:29.609445604 +0000 UTC m=+0.022659796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ac2433501a9b99f9676a555c1760e4373dbe52ea4fbf8a171a79c00f7bbcfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ac2433501a9b99f9676a555c1760e4373dbe52ea4fbf8a171a79c00f7bbcfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ac2433501a9b99f9676a555c1760e4373dbe52ea4fbf8a171a79c00f7bbcfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ac2433501a9b99f9676a555c1760e4373dbe52ea4fbf8a171a79c00f7bbcfe/merged/var/lib/ceph/mgr/ceph-compute-0.vujqxq supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:29 compute-0 podman[74631]: 2026-01-27 08:27:29.858018214 +0000 UTC m=+0.271232406 container init 3429ee293a25f1df2e70ba59705567adb65473d671002dcca4f587eb75ffcdcc (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:29 compute-0 podman[74631]: 2026-01-27 08:27:29.863169909 +0000 UTC m=+0.276384081 container start 3429ee293a25f1df2e70ba59705567adb65473d671002dcca4f587eb75ffcdcc (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:27:29 compute-0 bash[74631]: 3429ee293a25f1df2e70ba59705567adb65473d671002dcca4f587eb75ffcdcc
Jan 27 08:27:29 compute-0 systemd[1]: Started Ceph mgr.compute-0.vujqxq for 281e9bde-2795-59f4-98ac-90cf5b49a2de.
Jan 27 08:27:29 compute-0 ceph-mgr[74650]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:27:29 compute-0 ceph-mgr[74650]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 27 08:27:29 compute-0 ceph-mgr[74650]: pidfile_write: ignore empty --pid-file
Jan 27 08:27:30 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'alerts'
Jan 27 08:27:30 compute-0 podman[74675]: 2026-01-27 08:27:30.06024631 +0000 UTC m=+0.114462808 container create 19df0b4b1bdce307ca84d006de155e0311be1d7a91db822c5ef42e391511c5d0 (image=quay.io/ceph/ceph:v18, name=thirsty_sanderson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 08:27:30 compute-0 podman[74675]: 2026-01-27 08:27:29.969285571 +0000 UTC m=+0.023502089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:30 compute-0 systemd[1]: Started libpod-conmon-19df0b4b1bdce307ca84d006de155e0311be1d7a91db822c5ef42e391511c5d0.scope.
Jan 27 08:27:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52b766ff11c88777100160eb167a09988459ce4e3b64b749909587b6568dd54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52b766ff11c88777100160eb167a09988459ce4e3b64b749909587b6568dd54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52b766ff11c88777100160eb167a09988459ce4e3b64b749909587b6568dd54/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:30 compute-0 podman[74675]: 2026-01-27 08:27:30.168648175 +0000 UTC m=+0.222864693 container init 19df0b4b1bdce307ca84d006de155e0311be1d7a91db822c5ef42e391511c5d0 (image=quay.io/ceph/ceph:v18, name=thirsty_sanderson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 08:27:30 compute-0 podman[74675]: 2026-01-27 08:27:30.176487585 +0000 UTC m=+0.230704093 container start 19df0b4b1bdce307ca84d006de155e0311be1d7a91db822c5ef42e391511c5d0 (image=quay.io/ceph/ceph:v18, name=thirsty_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:27:30 compute-0 podman[74675]: 2026-01-27 08:27:30.194001205 +0000 UTC m=+0.248217733 container attach 19df0b4b1bdce307ca84d006de155e0311be1d7a91db822c5ef42e391511c5d0 (image=quay.io/ceph/ceph:v18, name=thirsty_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:27:30 compute-0 ceph-mgr[74650]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 27 08:27:30 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'balancer'
Jan 27 08:27:30 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:30.373+0000 7f1df8c1e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 27 08:27:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 27 08:27:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3199777025' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]: 
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]: {
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "health": {
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "status": "HEALTH_OK",
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "checks": {},
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "mutes": []
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     },
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "election_epoch": 5,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "quorum": [
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         0
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     ],
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "quorum_names": [
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "compute-0"
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     ],
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "quorum_age": 4,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "monmap": {
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "epoch": 1,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "min_mon_release_name": "reef",
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "num_mons": 1
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     },
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "osdmap": {
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "epoch": 1,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "num_osds": 0,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "num_up_osds": 0,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "osd_up_since": 0,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "num_in_osds": 0,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "osd_in_since": 0,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "num_remapped_pgs": 0
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     },
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "pgmap": {
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "pgs_by_state": [],
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "num_pgs": 0,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "num_pools": 0,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "num_objects": 0,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "data_bytes": 0,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "bytes_used": 0,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "bytes_avail": 0,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "bytes_total": 0
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     },
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "fsmap": {
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "epoch": 1,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "by_rank": [],
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "up:standby": 0
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     },
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "mgrmap": {
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "available": false,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "num_standbys": 0,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "modules": [
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:             "iostat",
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:             "nfs",
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:             "restful"
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         ],
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "services": {}
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     },
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "servicemap": {
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "epoch": 1,
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "modified": "2026-01-27T08:27:23.062023+0000",
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:         "services": {}
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     },
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]:     "progress_events": {}
Jan 27 08:27:30 compute-0 thirsty_sanderson[74692]: }
Jan 27 08:27:30 compute-0 systemd[1]: libpod-19df0b4b1bdce307ca84d006de155e0311be1d7a91db822c5ef42e391511c5d0.scope: Deactivated successfully.
Jan 27 08:27:30 compute-0 podman[74675]: 2026-01-27 08:27:30.671643043 +0000 UTC m=+0.725859541 container died 19df0b4b1bdce307ca84d006de155e0311be1d7a91db822c5ef42e391511c5d0 (image=quay.io/ceph/ceph:v18, name=thirsty_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:27:30 compute-0 ceph-mgr[74650]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 27 08:27:30 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'cephadm'
Jan 27 08:27:30 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:30.694+0000 7f1df8c1e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 27 08:27:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3199777025' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c52b766ff11c88777100160eb167a09988459ce4e3b64b749909587b6568dd54-merged.mount: Deactivated successfully.
Jan 27 08:27:30 compute-0 podman[74675]: 2026-01-27 08:27:30.964472554 +0000 UTC m=+1.018689052 container remove 19df0b4b1bdce307ca84d006de155e0311be1d7a91db822c5ef42e391511c5d0 (image=quay.io/ceph/ceph:v18, name=thirsty_sanderson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:27:30 compute-0 systemd[1]: libpod-conmon-19df0b4b1bdce307ca84d006de155e0311be1d7a91db822c5ef42e391511c5d0.scope: Deactivated successfully.
Jan 27 08:27:32 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'crash'
Jan 27 08:27:33 compute-0 podman[74741]: 2026-01-27 08:27:33.018939385 +0000 UTC m=+0.026067957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:33 compute-0 podman[74741]: 2026-01-27 08:27:33.153343391 +0000 UTC m=+0.160471953 container create 436c4949262d38d2658890840cc98be117337bf737f0f2df9b878851346a3715 (image=quay.io/ceph/ceph:v18, name=ecstatic_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:27:33 compute-0 systemd[1]: Started libpod-conmon-436c4949262d38d2658890840cc98be117337bf737f0f2df9b878851346a3715.scope.
Jan 27 08:27:33 compute-0 ceph-mgr[74650]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 27 08:27:33 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'dashboard'
Jan 27 08:27:33 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:33.196+0000 7f1df8c1e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 27 08:27:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b4418353fd1f41881950ee8749309c07417cd7acbd7b8760f15b3ee69033317/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b4418353fd1f41881950ee8749309c07417cd7acbd7b8760f15b3ee69033317/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b4418353fd1f41881950ee8749309c07417cd7acbd7b8760f15b3ee69033317/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:33 compute-0 podman[74741]: 2026-01-27 08:27:33.356768215 +0000 UTC m=+0.363896757 container init 436c4949262d38d2658890840cc98be117337bf737f0f2df9b878851346a3715 (image=quay.io/ceph/ceph:v18, name=ecstatic_euler, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:33 compute-0 podman[74741]: 2026-01-27 08:27:33.363008924 +0000 UTC m=+0.370137446 container start 436c4949262d38d2658890840cc98be117337bf737f0f2df9b878851346a3715 (image=quay.io/ceph/ceph:v18, name=ecstatic_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 27 08:27:33 compute-0 podman[74741]: 2026-01-27 08:27:33.380730013 +0000 UTC m=+0.387858575 container attach 436c4949262d38d2658890840cc98be117337bf737f0f2df9b878851346a3715 (image=quay.io/ceph/ceph:v18, name=ecstatic_euler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 27 08:27:33 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2473640459' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]: 
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]: {
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "health": {
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "status": "HEALTH_OK",
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "checks": {},
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "mutes": []
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     },
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "election_epoch": 5,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "quorum": [
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         0
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     ],
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "quorum_names": [
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "compute-0"
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     ],
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "quorum_age": 7,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "monmap": {
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "epoch": 1,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "min_mon_release_name": "reef",
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "num_mons": 1
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     },
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "osdmap": {
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "epoch": 1,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "num_osds": 0,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "num_up_osds": 0,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "osd_up_since": 0,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "num_in_osds": 0,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "osd_in_since": 0,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "num_remapped_pgs": 0
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     },
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "pgmap": {
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "pgs_by_state": [],
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "num_pgs": 0,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "num_pools": 0,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "num_objects": 0,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "data_bytes": 0,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "bytes_used": 0,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "bytes_avail": 0,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "bytes_total": 0
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     },
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "fsmap": {
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "epoch": 1,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "by_rank": [],
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "up:standby": 0
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     },
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "mgrmap": {
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "available": false,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "num_standbys": 0,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "modules": [
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:             "iostat",
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:             "nfs",
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:             "restful"
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         ],
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "services": {}
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     },
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "servicemap": {
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "epoch": 1,
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "modified": "2026-01-27T08:27:23.062023+0000",
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:         "services": {}
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     },
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]:     "progress_events": {}
Jan 27 08:27:33 compute-0 ecstatic_euler[74757]: }
Jan 27 08:27:33 compute-0 systemd[1]: libpod-436c4949262d38d2658890840cc98be117337bf737f0f2df9b878851346a3715.scope: Deactivated successfully.
Jan 27 08:27:33 compute-0 podman[74741]: 2026-01-27 08:27:33.764011973 +0000 UTC m=+0.771140505 container died 436c4949262d38d2658890840cc98be117337bf737f0f2df9b878851346a3715 (image=quay.io/ceph/ceph:v18, name=ecstatic_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:33 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2473640459' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b4418353fd1f41881950ee8749309c07417cd7acbd7b8760f15b3ee69033317-merged.mount: Deactivated successfully.
Jan 27 08:27:34 compute-0 podman[74741]: 2026-01-27 08:27:34.002866565 +0000 UTC m=+1.009995087 container remove 436c4949262d38d2658890840cc98be117337bf737f0f2df9b878851346a3715 (image=quay.io/ceph/ceph:v18, name=ecstatic_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:27:34 compute-0 systemd[1]: libpod-conmon-436c4949262d38d2658890840cc98be117337bf737f0f2df9b878851346a3715.scope: Deactivated successfully.
Jan 27 08:27:34 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'devicehealth'
Jan 27 08:27:34 compute-0 ceph-mgr[74650]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 27 08:27:34 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:34.943+0000 7f1df8c1e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 27 08:27:34 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'diskprediction_local'
Jan 27 08:27:35 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 27 08:27:35 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 27 08:27:35 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]:   from numpy import show_config as show_numpy_config
Jan 27 08:27:35 compute-0 ceph-mgr[74650]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 27 08:27:35 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'influx'
Jan 27 08:27:35 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:35.487+0000 7f1df8c1e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 27 08:27:35 compute-0 ceph-mgr[74650]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 27 08:27:35 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'insights'
Jan 27 08:27:35 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:35.724+0000 7f1df8c1e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 27 08:27:35 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'iostat'
Jan 27 08:27:36 compute-0 podman[74797]: 2026-01-27 08:27:36.102231735 +0000 UTC m=+0.068416822 container create 19b7be5e081f5bfbd57812ae278ee7d7cd8aa8848fdc901d7422314cade736f2 (image=quay.io/ceph/ceph:v18, name=funny_engelbart, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:36 compute-0 systemd[1]: Started libpod-conmon-19b7be5e081f5bfbd57812ae278ee7d7cd8aa8848fdc901d7422314cade736f2.scope.
Jan 27 08:27:36 compute-0 podman[74797]: 2026-01-27 08:27:36.067445624 +0000 UTC m=+0.033630761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33a7628679ecd0555bc025af24fb5d348fa232ab7c6a8d4b36dc1fb4ce70b0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33a7628679ecd0555bc025af24fb5d348fa232ab7c6a8d4b36dc1fb4ce70b0b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33a7628679ecd0555bc025af24fb5d348fa232ab7c6a8d4b36dc1fb4ce70b0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:36 compute-0 ceph-mgr[74650]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 27 08:27:36 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'k8sevents'
Jan 27 08:27:36 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:36.182+0000 7f1df8c1e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 27 08:27:36 compute-0 podman[74797]: 2026-01-27 08:27:36.186096684 +0000 UTC m=+0.152281801 container init 19b7be5e081f5bfbd57812ae278ee7d7cd8aa8848fdc901d7422314cade736f2 (image=quay.io/ceph/ceph:v18, name=funny_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:36 compute-0 podman[74797]: 2026-01-27 08:27:36.192007573 +0000 UTC m=+0.158192660 container start 19b7be5e081f5bfbd57812ae278ee7d7cd8aa8848fdc901d7422314cade736f2 (image=quay.io/ceph/ceph:v18, name=funny_engelbart, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 08:27:36 compute-0 podman[74797]: 2026-01-27 08:27:36.195249272 +0000 UTC m=+0.161434379 container attach 19b7be5e081f5bfbd57812ae278ee7d7cd8aa8848fdc901d7422314cade736f2 (image=quay.io/ceph/ceph:v18, name=funny_engelbart, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 08:27:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 27 08:27:36 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1871150983' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:36 compute-0 funny_engelbart[74813]: 
Jan 27 08:27:36 compute-0 funny_engelbart[74813]: {
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "health": {
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "status": "HEALTH_OK",
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "checks": {},
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "mutes": []
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     },
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "election_epoch": 5,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "quorum": [
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         0
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     ],
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "quorum_names": [
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "compute-0"
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     ],
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "quorum_age": 10,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "monmap": {
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "epoch": 1,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "min_mon_release_name": "reef",
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "num_mons": 1
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     },
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "osdmap": {
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "epoch": 1,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "num_osds": 0,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "num_up_osds": 0,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "osd_up_since": 0,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "num_in_osds": 0,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "osd_in_since": 0,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "num_remapped_pgs": 0
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     },
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "pgmap": {
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "pgs_by_state": [],
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "num_pgs": 0,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "num_pools": 0,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "num_objects": 0,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "data_bytes": 0,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "bytes_used": 0,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "bytes_avail": 0,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "bytes_total": 0
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     },
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "fsmap": {
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "epoch": 1,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "by_rank": [],
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "up:standby": 0
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     },
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "mgrmap": {
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "available": false,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "num_standbys": 0,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "modules": [
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:             "iostat",
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:             "nfs",
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:             "restful"
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         ],
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "services": {}
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     },
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "servicemap": {
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "epoch": 1,
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "modified": "2026-01-27T08:27:23.062023+0000",
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:         "services": {}
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     },
Jan 27 08:27:36 compute-0 funny_engelbart[74813]:     "progress_events": {}
Jan 27 08:27:36 compute-0 funny_engelbart[74813]: }
Jan 27 08:27:36 compute-0 systemd[1]: libpod-19b7be5e081f5bfbd57812ae278ee7d7cd8aa8848fdc901d7422314cade736f2.scope: Deactivated successfully.
Jan 27 08:27:36 compute-0 podman[74797]: 2026-01-27 08:27:36.58955769 +0000 UTC m=+0.555742787 container died 19b7be5e081f5bfbd57812ae278ee7d7cd8aa8848fdc901d7422314cade736f2 (image=quay.io/ceph/ceph:v18, name=funny_engelbart, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f33a7628679ecd0555bc025af24fb5d348fa232ab7c6a8d4b36dc1fb4ce70b0b-merged.mount: Deactivated successfully.
Jan 27 08:27:36 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1871150983' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:36 compute-0 podman[74797]: 2026-01-27 08:27:36.631972447 +0000 UTC m=+0.598157534 container remove 19b7be5e081f5bfbd57812ae278ee7d7cd8aa8848fdc901d7422314cade736f2 (image=quay.io/ceph/ceph:v18, name=funny_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:36 compute-0 systemd[1]: libpod-conmon-19b7be5e081f5bfbd57812ae278ee7d7cd8aa8848fdc901d7422314cade736f2.scope: Deactivated successfully.
Jan 27 08:27:37 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'localpool'
Jan 27 08:27:38 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'mds_autoscaler'
Jan 27 08:27:38 compute-0 podman[74851]: 2026-01-27 08:27:38.687325696 +0000 UTC m=+0.034842103 container create 04a844bb4e11cf5b8be2755575355512e904e9935a19257ea7a0af7f008ff3d3 (image=quay.io/ceph/ceph:v18, name=tender_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:38 compute-0 systemd[1]: Started libpod-conmon-04a844bb4e11cf5b8be2755575355512e904e9935a19257ea7a0af7f008ff3d3.scope.
Jan 27 08:27:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc8950a995a94dfe3d6c73738b1bcda70ebb1315b573e970f30a9cc1fdc962d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc8950a995a94dfe3d6c73738b1bcda70ebb1315b573e970f30a9cc1fdc962d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc8950a995a94dfe3d6c73738b1bcda70ebb1315b573e970f30a9cc1fdc962d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:38 compute-0 podman[74851]: 2026-01-27 08:27:38.753222799 +0000 UTC m=+0.100739216 container init 04a844bb4e11cf5b8be2755575355512e904e9935a19257ea7a0af7f008ff3d3 (image=quay.io/ceph/ceph:v18, name=tender_wu, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:27:38 compute-0 podman[74851]: 2026-01-27 08:27:38.759660353 +0000 UTC m=+0.107176760 container start 04a844bb4e11cf5b8be2755575355512e904e9935a19257ea7a0af7f008ff3d3 (image=quay.io/ceph/ceph:v18, name=tender_wu, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:27:38 compute-0 podman[74851]: 2026-01-27 08:27:38.763472686 +0000 UTC m=+0.110989113 container attach 04a844bb4e11cf5b8be2755575355512e904e9935a19257ea7a0af7f008ff3d3 (image=quay.io/ceph/ceph:v18, name=tender_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:27:38 compute-0 podman[74851]: 2026-01-27 08:27:38.670374888 +0000 UTC m=+0.017891315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:38 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'mirroring'
Jan 27 08:27:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 27 08:27:39 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487048202' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:39 compute-0 tender_wu[74868]: 
Jan 27 08:27:39 compute-0 tender_wu[74868]: {
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "health": {
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "status": "HEALTH_OK",
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "checks": {},
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "mutes": []
Jan 27 08:27:39 compute-0 tender_wu[74868]:     },
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "election_epoch": 5,
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "quorum": [
Jan 27 08:27:39 compute-0 tender_wu[74868]:         0
Jan 27 08:27:39 compute-0 tender_wu[74868]:     ],
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "quorum_names": [
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "compute-0"
Jan 27 08:27:39 compute-0 tender_wu[74868]:     ],
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "quorum_age": 12,
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "monmap": {
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "epoch": 1,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "min_mon_release_name": "reef",
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "num_mons": 1
Jan 27 08:27:39 compute-0 tender_wu[74868]:     },
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "osdmap": {
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "epoch": 1,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "num_osds": 0,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "num_up_osds": 0,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "osd_up_since": 0,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "num_in_osds": 0,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "osd_in_since": 0,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "num_remapped_pgs": 0
Jan 27 08:27:39 compute-0 tender_wu[74868]:     },
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "pgmap": {
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "pgs_by_state": [],
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "num_pgs": 0,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "num_pools": 0,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "num_objects": 0,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "data_bytes": 0,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "bytes_used": 0,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "bytes_avail": 0,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "bytes_total": 0
Jan 27 08:27:39 compute-0 tender_wu[74868]:     },
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "fsmap": {
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "epoch": 1,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "by_rank": [],
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "up:standby": 0
Jan 27 08:27:39 compute-0 tender_wu[74868]:     },
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "mgrmap": {
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "available": false,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "num_standbys": 0,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "modules": [
Jan 27 08:27:39 compute-0 tender_wu[74868]:             "iostat",
Jan 27 08:27:39 compute-0 tender_wu[74868]:             "nfs",
Jan 27 08:27:39 compute-0 tender_wu[74868]:             "restful"
Jan 27 08:27:39 compute-0 tender_wu[74868]:         ],
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "services": {}
Jan 27 08:27:39 compute-0 tender_wu[74868]:     },
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "servicemap": {
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "epoch": 1,
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "modified": "2026-01-27T08:27:23.062023+0000",
Jan 27 08:27:39 compute-0 tender_wu[74868]:         "services": {}
Jan 27 08:27:39 compute-0 tender_wu[74868]:     },
Jan 27 08:27:39 compute-0 tender_wu[74868]:     "progress_events": {}
Jan 27 08:27:39 compute-0 tender_wu[74868]: }
Jan 27 08:27:39 compute-0 systemd[1]: libpod-04a844bb4e11cf5b8be2755575355512e904e9935a19257ea7a0af7f008ff3d3.scope: Deactivated successfully.
Jan 27 08:27:39 compute-0 podman[74851]: 2026-01-27 08:27:39.141117474 +0000 UTC m=+0.488633881 container died 04a844bb4e11cf5b8be2755575355512e904e9935a19257ea7a0af7f008ff3d3 (image=quay.io/ceph/ceph:v18, name=tender_wu, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:27:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecc8950a995a94dfe3d6c73738b1bcda70ebb1315b573e970f30a9cc1fdc962d-merged.mount: Deactivated successfully.
Jan 27 08:27:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1487048202' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:39 compute-0 podman[74851]: 2026-01-27 08:27:39.185754081 +0000 UTC m=+0.533270478 container remove 04a844bb4e11cf5b8be2755575355512e904e9935a19257ea7a0af7f008ff3d3 (image=quay.io/ceph/ceph:v18, name=tender_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 27 08:27:39 compute-0 systemd[1]: libpod-conmon-04a844bb4e11cf5b8be2755575355512e904e9935a19257ea7a0af7f008ff3d3.scope: Deactivated successfully.
Jan 27 08:27:39 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'nfs'
Jan 27 08:27:39 compute-0 ceph-mgr[74650]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 27 08:27:39 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'orchestrator'
Jan 27 08:27:39 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:39.946+0000 7f1df8c1e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 27 08:27:40 compute-0 ceph-mgr[74650]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 27 08:27:40 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'osd_perf_query'
Jan 27 08:27:40 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:40.658+0000 7f1df8c1e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 27 08:27:40 compute-0 ceph-mgr[74650]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 27 08:27:40 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'osd_support'
Jan 27 08:27:40 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:40.946+0000 7f1df8c1e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 27 08:27:41 compute-0 ceph-mgr[74650]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 27 08:27:41 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'pg_autoscaler'
Jan 27 08:27:41 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:41.204+0000 7f1df8c1e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 27 08:27:41 compute-0 podman[74906]: 2026-01-27 08:27:41.251155782 +0000 UTC m=+0.043611631 container create beb03545a305853ba58707d41c103147ccd12fcc423eba6168fa2c27c0d1387b (image=quay.io/ceph/ceph:v18, name=awesome_swirles, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 08:27:41 compute-0 systemd[1]: Started libpod-conmon-beb03545a305853ba58707d41c103147ccd12fcc423eba6168fa2c27c0d1387b.scope.
Jan 27 08:27:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64a26d6b460636c10fbcf774aaf4ebfc566e982f3c2a9410b22571f9d47e0430/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64a26d6b460636c10fbcf774aaf4ebfc566e982f3c2a9410b22571f9d47e0430/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64a26d6b460636c10fbcf774aaf4ebfc566e982f3c2a9410b22571f9d47e0430/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:41 compute-0 podman[74906]: 2026-01-27 08:27:41.306398037 +0000 UTC m=+0.098853906 container init beb03545a305853ba58707d41c103147ccd12fcc423eba6168fa2c27c0d1387b (image=quay.io/ceph/ceph:v18, name=awesome_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:41 compute-0 podman[74906]: 2026-01-27 08:27:41.311212257 +0000 UTC m=+0.103668106 container start beb03545a305853ba58707d41c103147ccd12fcc423eba6168fa2c27c0d1387b (image=quay.io/ceph/ceph:v18, name=awesome_swirles, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 08:27:41 compute-0 podman[74906]: 2026-01-27 08:27:41.315586765 +0000 UTC m=+0.108042634 container attach beb03545a305853ba58707d41c103147ccd12fcc423eba6168fa2c27c0d1387b (image=quay.io/ceph/ceph:v18, name=awesome_swirles, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:41 compute-0 podman[74906]: 2026-01-27 08:27:41.232134658 +0000 UTC m=+0.024590557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:41 compute-0 ceph-mgr[74650]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 27 08:27:41 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'progress'
Jan 27 08:27:41 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:41.526+0000 7f1df8c1e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 27 08:27:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 27 08:27:41 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051714843' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:41 compute-0 awesome_swirles[74922]: 
Jan 27 08:27:41 compute-0 awesome_swirles[74922]: {
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "health": {
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "status": "HEALTH_OK",
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "checks": {},
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "mutes": []
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     },
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "election_epoch": 5,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "quorum": [
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         0
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     ],
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "quorum_names": [
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "compute-0"
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     ],
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "quorum_age": 15,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "monmap": {
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "epoch": 1,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "min_mon_release_name": "reef",
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "num_mons": 1
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     },
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "osdmap": {
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "epoch": 1,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "num_osds": 0,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "num_up_osds": 0,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "osd_up_since": 0,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "num_in_osds": 0,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "osd_in_since": 0,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "num_remapped_pgs": 0
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     },
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "pgmap": {
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "pgs_by_state": [],
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "num_pgs": 0,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "num_pools": 0,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "num_objects": 0,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "data_bytes": 0,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "bytes_used": 0,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "bytes_avail": 0,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "bytes_total": 0
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     },
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "fsmap": {
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "epoch": 1,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "by_rank": [],
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "up:standby": 0
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     },
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "mgrmap": {
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "available": false,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "num_standbys": 0,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "modules": [
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:             "iostat",
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:             "nfs",
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:             "restful"
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         ],
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "services": {}
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     },
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "servicemap": {
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "epoch": 1,
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "modified": "2026-01-27T08:27:23.062023+0000",
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:         "services": {}
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     },
Jan 27 08:27:41 compute-0 awesome_swirles[74922]:     "progress_events": {}
Jan 27 08:27:41 compute-0 awesome_swirles[74922]: }
Jan 27 08:27:41 compute-0 systemd[1]: libpod-beb03545a305853ba58707d41c103147ccd12fcc423eba6168fa2c27c0d1387b.scope: Deactivated successfully.
Jan 27 08:27:41 compute-0 podman[74906]: 2026-01-27 08:27:41.701440475 +0000 UTC m=+0.493896324 container died beb03545a305853ba58707d41c103147ccd12fcc423eba6168fa2c27c0d1387b (image=quay.io/ceph/ceph:v18, name=awesome_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-64a26d6b460636c10fbcf774aaf4ebfc566e982f3c2a9410b22571f9d47e0430-merged.mount: Deactivated successfully.
Jan 27 08:27:41 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3051714843' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:41 compute-0 podman[74906]: 2026-01-27 08:27:41.744925512 +0000 UTC m=+0.537381361 container remove beb03545a305853ba58707d41c103147ccd12fcc423eba6168fa2c27c0d1387b (image=quay.io/ceph/ceph:v18, name=awesome_swirles, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:27:41 compute-0 systemd[1]: libpod-conmon-beb03545a305853ba58707d41c103147ccd12fcc423eba6168fa2c27c0d1387b.scope: Deactivated successfully.
Jan 27 08:27:41 compute-0 ceph-mgr[74650]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 27 08:27:41 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'prometheus'
Jan 27 08:27:41 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:41.788+0000 7f1df8c1e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 27 08:27:42 compute-0 ceph-mgr[74650]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 27 08:27:42 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'rbd_support'
Jan 27 08:27:42 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:42.786+0000 7f1df8c1e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 27 08:27:43 compute-0 ceph-mgr[74650]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 27 08:27:43 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'restful'
Jan 27 08:27:43 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:43.099+0000 7f1df8c1e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 27 08:27:43 compute-0 podman[74963]: 2026-01-27 08:27:43.825994097 +0000 UTC m=+0.058614517 container create 8f2baa90b836606f7144018a5ff1f599cd858984ab13c998db56e29cc09203f0 (image=quay.io/ceph/ceph:v18, name=xenodochial_zhukovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:27:43 compute-0 systemd[1]: Started libpod-conmon-8f2baa90b836606f7144018a5ff1f599cd858984ab13c998db56e29cc09203f0.scope.
Jan 27 08:27:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cac59f620fec3607b219069cba5ef4d0e268b2ecc4dff89a0503db9b7248e7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cac59f620fec3607b219069cba5ef4d0e268b2ecc4dff89a0503db9b7248e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cac59f620fec3607b219069cba5ef4d0e268b2ecc4dff89a0503db9b7248e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:43 compute-0 podman[74963]: 2026-01-27 08:27:43.886300618 +0000 UTC m=+0.118920938 container init 8f2baa90b836606f7144018a5ff1f599cd858984ab13c998db56e29cc09203f0 (image=quay.io/ceph/ceph:v18, name=xenodochial_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 27 08:27:43 compute-0 podman[74963]: 2026-01-27 08:27:43.790956918 +0000 UTC m=+0.023577258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:43 compute-0 podman[74963]: 2026-01-27 08:27:43.891014755 +0000 UTC m=+0.123635085 container start 8f2baa90b836606f7144018a5ff1f599cd858984ab13c998db56e29cc09203f0 (image=quay.io/ceph/ceph:v18, name=xenodochial_zhukovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:27:43 compute-0 podman[74963]: 2026-01-27 08:27:43.894669484 +0000 UTC m=+0.127289924 container attach 8f2baa90b836606f7144018a5ff1f599cd858984ab13c998db56e29cc09203f0 (image=quay.io/ceph/ceph:v18, name=xenodochial_zhukovsky, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:27:43 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'rgw'
Jan 27 08:27:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 27 08:27:44 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995270386' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]: 
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]: {
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "health": {
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "status": "HEALTH_OK",
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "checks": {},
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "mutes": []
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     },
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "election_epoch": 5,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "quorum": [
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         0
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     ],
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "quorum_names": [
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "compute-0"
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     ],
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "quorum_age": 17,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "monmap": {
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "epoch": 1,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "min_mon_release_name": "reef",
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "num_mons": 1
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     },
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "osdmap": {
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "epoch": 1,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "num_osds": 0,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "num_up_osds": 0,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "osd_up_since": 0,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "num_in_osds": 0,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "osd_in_since": 0,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "num_remapped_pgs": 0
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     },
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "pgmap": {
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "pgs_by_state": [],
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "num_pgs": 0,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "num_pools": 0,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "num_objects": 0,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "data_bytes": 0,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "bytes_used": 0,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "bytes_avail": 0,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "bytes_total": 0
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     },
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "fsmap": {
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "epoch": 1,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "by_rank": [],
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "up:standby": 0
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     },
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "mgrmap": {
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "available": false,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "num_standbys": 0,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "modules": [
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:             "iostat",
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:             "nfs",
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:             "restful"
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         ],
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "services": {}
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     },
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "servicemap": {
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "epoch": 1,
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "modified": "2026-01-27T08:27:23.062023+0000",
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:         "services": {}
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     },
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]:     "progress_events": {}
Jan 27 08:27:44 compute-0 xenodochial_zhukovsky[74979]: }
Jan 27 08:27:44 compute-0 systemd[1]: libpod-8f2baa90b836606f7144018a5ff1f599cd858984ab13c998db56e29cc09203f0.scope: Deactivated successfully.
Jan 27 08:27:44 compute-0 podman[74963]: 2026-01-27 08:27:44.289053415 +0000 UTC m=+0.521673735 container died 8f2baa90b836606f7144018a5ff1f599cd858984ab13c998db56e29cc09203f0 (image=quay.io/ceph/ceph:v18, name=xenodochial_zhukovsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 27 08:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-28cac59f620fec3607b219069cba5ef4d0e268b2ecc4dff89a0503db9b7248e7-merged.mount: Deactivated successfully.
Jan 27 08:27:44 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2995270386' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:44 compute-0 podman[74963]: 2026-01-27 08:27:44.330997359 +0000 UTC m=+0.563617679 container remove 8f2baa90b836606f7144018a5ff1f599cd858984ab13c998db56e29cc09203f0 (image=quay.io/ceph/ceph:v18, name=xenodochial_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:27:44 compute-0 systemd[1]: libpod-conmon-8f2baa90b836606f7144018a5ff1f599cd858984ab13c998db56e29cc09203f0.scope: Deactivated successfully.
Jan 27 08:27:44 compute-0 ceph-mgr[74650]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 27 08:27:44 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:44.642+0000 7f1df8c1e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 27 08:27:44 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'rook'
Jan 27 08:27:46 compute-0 podman[75021]: 2026-01-27 08:27:46.391119637 +0000 UTC m=+0.038002229 container create a059e62317d581e872147fd3d02efb098e1ca86afd9a82e3ad226aab5680879c (image=quay.io/ceph/ceph:v18, name=romantic_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 27 08:27:46 compute-0 systemd[1]: Started libpod-conmon-a059e62317d581e872147fd3d02efb098e1ca86afd9a82e3ad226aab5680879c.scope.
Jan 27 08:27:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a25d01d6cfeb03fd358158811e0b730bf5ce55bf72f667575020c369164cb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a25d01d6cfeb03fd358158811e0b730bf5ce55bf72f667575020c369164cb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a25d01d6cfeb03fd358158811e0b730bf5ce55bf72f667575020c369164cb4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:46 compute-0 podman[75021]: 2026-01-27 08:27:46.37273334 +0000 UTC m=+0.019615952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:46 compute-0 podman[75021]: 2026-01-27 08:27:46.472478078 +0000 UTC m=+0.119360700 container init a059e62317d581e872147fd3d02efb098e1ca86afd9a82e3ad226aab5680879c (image=quay.io/ceph/ceph:v18, name=romantic_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:27:46 compute-0 podman[75021]: 2026-01-27 08:27:46.479671773 +0000 UTC m=+0.126554365 container start a059e62317d581e872147fd3d02efb098e1ca86afd9a82e3ad226aab5680879c (image=quay.io/ceph/ceph:v18, name=romantic_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 27 08:27:46 compute-0 podman[75021]: 2026-01-27 08:27:46.488917253 +0000 UTC m=+0.135799865 container attach a059e62317d581e872147fd3d02efb098e1ca86afd9a82e3ad226aab5680879c (image=quay.io/ceph/ceph:v18, name=romantic_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 27 08:27:46 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732658526' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]: 
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]: {
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "health": {
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "status": "HEALTH_OK",
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "checks": {},
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "mutes": []
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     },
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "election_epoch": 5,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "quorum": [
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         0
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     ],
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "quorum_names": [
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "compute-0"
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     ],
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "quorum_age": 20,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "monmap": {
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "epoch": 1,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "min_mon_release_name": "reef",
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "num_mons": 1
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     },
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "osdmap": {
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "epoch": 1,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "num_osds": 0,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "num_up_osds": 0,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "osd_up_since": 0,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "num_in_osds": 0,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "osd_in_since": 0,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "num_remapped_pgs": 0
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     },
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "pgmap": {
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "pgs_by_state": [],
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "num_pgs": 0,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "num_pools": 0,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "num_objects": 0,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "data_bytes": 0,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "bytes_used": 0,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "bytes_avail": 0,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "bytes_total": 0
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     },
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "fsmap": {
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "epoch": 1,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "by_rank": [],
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "up:standby": 0
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     },
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "mgrmap": {
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "available": false,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "num_standbys": 0,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "modules": [
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:             "iostat",
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:             "nfs",
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:             "restful"
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         ],
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "services": {}
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     },
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "servicemap": {
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "epoch": 1,
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "modified": "2026-01-27T08:27:23.062023+0000",
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:         "services": {}
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     },
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]:     "progress_events": {}
Jan 27 08:27:46 compute-0 romantic_satoshi[75036]: }
Jan 27 08:27:46 compute-0 systemd[1]: libpod-a059e62317d581e872147fd3d02efb098e1ca86afd9a82e3ad226aab5680879c.scope: Deactivated successfully.
Jan 27 08:27:46 compute-0 podman[75021]: 2026-01-27 08:27:46.891131735 +0000 UTC m=+0.538014407 container died a059e62317d581e872147fd3d02efb098e1ca86afd9a82e3ad226aab5680879c (image=quay.io/ceph/ceph:v18, name=romantic_satoshi, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:46 compute-0 ceph-mgr[74650]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 27 08:27:46 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:46.919+0000 7f1df8c1e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 27 08:27:46 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'selftest'
Jan 27 08:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-89a25d01d6cfeb03fd358158811e0b730bf5ce55bf72f667575020c369164cb4-merged.mount: Deactivated successfully.
Jan 27 08:27:46 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1732658526' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:46 compute-0 podman[75021]: 2026-01-27 08:27:46.993164176 +0000 UTC m=+0.640046768 container remove a059e62317d581e872147fd3d02efb098e1ca86afd9a82e3ad226aab5680879c (image=quay.io/ceph/ceph:v18, name=romantic_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:47 compute-0 systemd[1]: libpod-conmon-a059e62317d581e872147fd3d02efb098e1ca86afd9a82e3ad226aab5680879c.scope: Deactivated successfully.
Jan 27 08:27:47 compute-0 ceph-mgr[74650]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 27 08:27:47 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'snap_schedule'
Jan 27 08:27:47 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:47.163+0000 7f1df8c1e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 27 08:27:47 compute-0 ceph-mgr[74650]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 27 08:27:47 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:47.423+0000 7f1df8c1e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 27 08:27:47 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'stats'
Jan 27 08:27:47 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'status'
Jan 27 08:27:47 compute-0 ceph-mgr[74650]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 27 08:27:47 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'telegraf'
Jan 27 08:27:47 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:47.951+0000 7f1df8c1e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 27 08:27:48 compute-0 ceph-mgr[74650]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 27 08:27:48 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'telemetry'
Jan 27 08:27:48 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:48.184+0000 7f1df8c1e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 27 08:27:48 compute-0 ceph-mgr[74650]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 27 08:27:48 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'test_orchestrator'
Jan 27 08:27:48 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:48.801+0000 7f1df8c1e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 27 08:27:49 compute-0 podman[75076]: 2026-01-27 08:27:49.071187608 +0000 UTC m=+0.049605914 container create af3b4725feb14cf9e43783cfb14709481bbf73b96af3e8218030940b148cfc7e (image=quay.io/ceph/ceph:v18, name=dreamy_greider, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 27 08:27:49 compute-0 systemd[1]: Started libpod-conmon-af3b4725feb14cf9e43783cfb14709481bbf73b96af3e8218030940b148cfc7e.scope.
Jan 27 08:27:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/396f36ec3b7309ef5d16cd0b939d47efde41f017b55b3247e1cff94481974ca5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/396f36ec3b7309ef5d16cd0b939d47efde41f017b55b3247e1cff94481974ca5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/396f36ec3b7309ef5d16cd0b939d47efde41f017b55b3247e1cff94481974ca5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:49 compute-0 podman[75076]: 2026-01-27 08:27:49.045639167 +0000 UTC m=+0.024057503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:49 compute-0 podman[75076]: 2026-01-27 08:27:49.156394533 +0000 UTC m=+0.134812869 container init af3b4725feb14cf9e43783cfb14709481bbf73b96af3e8218030940b148cfc7e (image=quay.io/ceph/ceph:v18, name=dreamy_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 27 08:27:49 compute-0 podman[75076]: 2026-01-27 08:27:49.162306423 +0000 UTC m=+0.140724739 container start af3b4725feb14cf9e43783cfb14709481bbf73b96af3e8218030940b148cfc7e (image=quay.io/ceph/ceph:v18, name=dreamy_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:27:49 compute-0 podman[75076]: 2026-01-27 08:27:49.175927311 +0000 UTC m=+0.154345667 container attach af3b4725feb14cf9e43783cfb14709481bbf73b96af3e8218030940b148cfc7e (image=quay.io/ceph/ceph:v18, name=dreamy_greider, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:49 compute-0 ceph-mgr[74650]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 27 08:27:49 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'volumes'
Jan 27 08:27:49 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:49.483+0000 7f1df8c1e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 27 08:27:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 27 08:27:49 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1773481621' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:49 compute-0 dreamy_greider[75093]: 
Jan 27 08:27:49 compute-0 dreamy_greider[75093]: {
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "health": {
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "status": "HEALTH_OK",
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "checks": {},
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "mutes": []
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     },
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "election_epoch": 5,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "quorum": [
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         0
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     ],
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "quorum_names": [
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "compute-0"
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     ],
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "quorum_age": 23,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "monmap": {
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "epoch": 1,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "min_mon_release_name": "reef",
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "num_mons": 1
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     },
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "osdmap": {
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "epoch": 1,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "num_osds": 0,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "num_up_osds": 0,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "osd_up_since": 0,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "num_in_osds": 0,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "osd_in_since": 0,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "num_remapped_pgs": 0
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     },
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "pgmap": {
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "pgs_by_state": [],
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "num_pgs": 0,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "num_pools": 0,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "num_objects": 0,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "data_bytes": 0,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "bytes_used": 0,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "bytes_avail": 0,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "bytes_total": 0
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     },
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "fsmap": {
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "epoch": 1,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "by_rank": [],
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "up:standby": 0
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     },
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "mgrmap": {
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "available": false,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "num_standbys": 0,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "modules": [
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:             "iostat",
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:             "nfs",
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:             "restful"
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         ],
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "services": {}
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     },
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "servicemap": {
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "epoch": 1,
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "modified": "2026-01-27T08:27:23.062023+0000",
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:         "services": {}
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     },
Jan 27 08:27:49 compute-0 dreamy_greider[75093]:     "progress_events": {}
Jan 27 08:27:49 compute-0 dreamy_greider[75093]: }
Jan 27 08:27:49 compute-0 systemd[1]: libpod-af3b4725feb14cf9e43783cfb14709481bbf73b96af3e8218030940b148cfc7e.scope: Deactivated successfully.
Jan 27 08:27:49 compute-0 podman[75076]: 2026-01-27 08:27:49.576356585 +0000 UTC m=+0.554774901 container died af3b4725feb14cf9e43783cfb14709481bbf73b96af3e8218030940b148cfc7e (image=quay.io/ceph/ceph:v18, name=dreamy_greider, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:27:49 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1773481621' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-396f36ec3b7309ef5d16cd0b939d47efde41f017b55b3247e1cff94481974ca5-merged.mount: Deactivated successfully.
Jan 27 08:27:49 compute-0 podman[75076]: 2026-01-27 08:27:49.655934429 +0000 UTC m=+0.634352765 container remove af3b4725feb14cf9e43783cfb14709481bbf73b96af3e8218030940b148cfc7e (image=quay.io/ceph/ceph:v18, name=dreamy_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:27:49 compute-0 systemd[1]: libpod-conmon-af3b4725feb14cf9e43783cfb14709481bbf73b96af3e8218030940b148cfc7e.scope: Deactivated successfully.
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'zabbix'
Jan 27 08:27:50 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:50.191+0000 7f1df8c1e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 27 08:27:50 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:50.424+0000 7f1df8c1e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: ms_deliver_dispatch: unhandled message 0x55e97f57cf20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vujqxq
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr handle_mgr_map Activating!
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr handle_mgr_map I am now activating
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.vujqxq(active, starting, since 0.0143948s)
Jan 27 08:27:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e1 all = 1
Jan 27 08:27:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vujqxq", "id": "compute-0.vujqxq"} v 0) v1
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vujqxq", "id": "compute-0.vujqxq"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: balancer
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [balancer INFO root] Starting
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Manager daemon compute-0.vujqxq is now available
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: crash
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:27:50
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [balancer INFO root] No pools available
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: devicehealth
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [devicehealth INFO root] Starting
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: iostat
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: nfs
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: orchestrator
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: pg_autoscaler
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: progress
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [progress INFO root] Loading...
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [progress INFO root] No stored events to load
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [progress INFO root] Loaded [] historic events
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [progress INFO root] Loaded OSDMap, ready.
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [rbd_support INFO root] recovery thread starting
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [rbd_support INFO root] starting setup
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: rbd_support
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: restful
Jan 27 08:27:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vujqxq/mirror_snapshot_schedule"} v 0) v1
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vujqxq/mirror_snapshot_schedule"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: status
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [restful INFO root] server_addr: :: server_port: 8003
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: telemetry
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [restful WARNING root] server not running: no certificate configured
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [rbd_support INFO root] PerfHandler: starting
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TaskHandler: starting
Jan 27 08:27:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Jan 27 08:27:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vujqxq/trash_purge_schedule"} v 0) v1
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vujqxq/trash_purge_schedule"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' 
Jan 27 08:27:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: [rbd_support INFO root] setup complete
Jan 27 08:27:50 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: volumes
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' 
Jan 27 08:27:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Jan 27 08:27:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' 
Jan 27 08:27:50 compute-0 ceph-mon[74357]: Activating manager daemon compute-0.vujqxq
Jan 27 08:27:50 compute-0 ceph-mon[74357]: mgrmap e2: compute-0.vujqxq(active, starting, since 0.0143948s)
Jan 27 08:27:50 compute-0 ceph-mon[74357]: from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mon[74357]: from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mon[74357]: from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mon[74357]: from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mon[74357]: from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vujqxq", "id": "compute-0.vujqxq"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mon[74357]: Manager daemon compute-0.vujqxq is now available
Jan 27 08:27:50 compute-0 ceph-mon[74357]: from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vujqxq/mirror_snapshot_schedule"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mon[74357]: from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vujqxq/trash_purge_schedule"}]: dispatch
Jan 27 08:27:50 compute-0 ceph-mon[74357]: from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' 
Jan 27 08:27:50 compute-0 ceph-mon[74357]: from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' 
Jan 27 08:27:50 compute-0 ceph-mon[74357]: from='mgr.14102 192.168.122.100:0/3937432060' entity='mgr.compute-0.vujqxq' 
Jan 27 08:27:51 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.vujqxq(active, since 1.02918s)
Jan 27 08:27:51 compute-0 podman[75213]: 2026-01-27 08:27:51.731233986 +0000 UTC m=+0.051862674 container create fe54e251405faa54e20b52a508e5b4625bc09f29f967e794e9504d8283bb163b (image=quay.io/ceph/ceph:v18, name=happy_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 08:27:51 compute-0 systemd[1]: Started libpod-conmon-fe54e251405faa54e20b52a508e5b4625bc09f29f967e794e9504d8283bb163b.scope.
Jan 27 08:27:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877423cb600f9a54a2abd5ee21e9a64107c553609ea431f0b0cb9e07f0722672/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877423cb600f9a54a2abd5ee21e9a64107c553609ea431f0b0cb9e07f0722672/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877423cb600f9a54a2abd5ee21e9a64107c553609ea431f0b0cb9e07f0722672/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:51 compute-0 podman[75213]: 2026-01-27 08:27:51.701832771 +0000 UTC m=+0.022461539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:51 compute-0 podman[75213]: 2026-01-27 08:27:51.797013577 +0000 UTC m=+0.117642265 container init fe54e251405faa54e20b52a508e5b4625bc09f29f967e794e9504d8283bb163b (image=quay.io/ceph/ceph:v18, name=happy_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:27:51 compute-0 podman[75213]: 2026-01-27 08:27:51.803758379 +0000 UTC m=+0.124387067 container start fe54e251405faa54e20b52a508e5b4625bc09f29f967e794e9504d8283bb163b (image=quay.io/ceph/ceph:v18, name=happy_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:27:51 compute-0 podman[75213]: 2026-01-27 08:27:51.81007966 +0000 UTC m=+0.130708378 container attach fe54e251405faa54e20b52a508e5b4625bc09f29f967e794e9504d8283bb163b (image=quay.io/ceph/ceph:v18, name=happy_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 08:27:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 27 08:27:52 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691954651' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:52 compute-0 happy_thompson[75229]: 
Jan 27 08:27:52 compute-0 happy_thompson[75229]: {
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "health": {
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "status": "HEALTH_OK",
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "checks": {},
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "mutes": []
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     },
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "election_epoch": 5,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "quorum": [
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         0
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     ],
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "quorum_names": [
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "compute-0"
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     ],
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "quorum_age": 26,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "monmap": {
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "epoch": 1,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "min_mon_release_name": "reef",
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "num_mons": 1
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     },
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "osdmap": {
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "epoch": 1,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "num_osds": 0,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "num_up_osds": 0,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "osd_up_since": 0,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "num_in_osds": 0,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "osd_in_since": 0,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "num_remapped_pgs": 0
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     },
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "pgmap": {
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "pgs_by_state": [],
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "num_pgs": 0,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "num_pools": 0,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "num_objects": 0,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "data_bytes": 0,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "bytes_used": 0,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "bytes_avail": 0,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "bytes_total": 0
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     },
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "fsmap": {
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "epoch": 1,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "by_rank": [],
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "up:standby": 0
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     },
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "mgrmap": {
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "available": true,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "num_standbys": 0,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "modules": [
Jan 27 08:27:52 compute-0 happy_thompson[75229]:             "iostat",
Jan 27 08:27:52 compute-0 happy_thompson[75229]:             "nfs",
Jan 27 08:27:52 compute-0 happy_thompson[75229]:             "restful"
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         ],
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "services": {}
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     },
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "servicemap": {
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "epoch": 1,
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "modified": "2026-01-27T08:27:23.062023+0000",
Jan 27 08:27:52 compute-0 happy_thompson[75229]:         "services": {}
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     },
Jan 27 08:27:52 compute-0 happy_thompson[75229]:     "progress_events": {}
Jan 27 08:27:52 compute-0 happy_thompson[75229]: }
Jan 27 08:27:52 compute-0 systemd[1]: libpod-fe54e251405faa54e20b52a508e5b4625bc09f29f967e794e9504d8283bb163b.scope: Deactivated successfully.
Jan 27 08:27:52 compute-0 podman[75213]: 2026-01-27 08:27:52.393413763 +0000 UTC m=+0.714042461 container died fe54e251405faa54e20b52a508e5b4625bc09f29f967e794e9504d8283bb163b (image=quay.io/ceph/ceph:v18, name=happy_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 27 08:27:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-877423cb600f9a54a2abd5ee21e9a64107c553609ea431f0b0cb9e07f0722672-merged.mount: Deactivated successfully.
Jan 27 08:27:52 compute-0 ceph-mgr[74650]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:27:52 compute-0 ceph-mon[74357]: mgrmap e3: compute-0.vujqxq(active, since 1.02918s)
Jan 27 08:27:52 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2691954651' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 27 08:27:52 compute-0 podman[75213]: 2026-01-27 08:27:52.463606522 +0000 UTC m=+0.784235210 container remove fe54e251405faa54e20b52a508e5b4625bc09f29f967e794e9504d8283bb163b (image=quay.io/ceph/ceph:v18, name=happy_thompson, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:52 compute-0 systemd[1]: libpod-conmon-fe54e251405faa54e20b52a508e5b4625bc09f29f967e794e9504d8283bb163b.scope: Deactivated successfully.
Jan 27 08:27:52 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.vujqxq(active, since 2s)
Jan 27 08:27:52 compute-0 podman[75270]: 2026-01-27 08:27:52.533871213 +0000 UTC m=+0.051461843 container create 951223e8c0f67b492b7f44aabf774ac484e604e722b7396e76d9513e64ca5374 (image=quay.io/ceph/ceph:v18, name=funny_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:27:52 compute-0 podman[75270]: 2026-01-27 08:27:52.505141306 +0000 UTC m=+0.022731996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:52 compute-0 systemd[1]: Started libpod-conmon-951223e8c0f67b492b7f44aabf774ac484e604e722b7396e76d9513e64ca5374.scope.
Jan 27 08:27:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eaed90ca79d5bbf034ab35dadbd81ec1129ed6d58fac00776764d9012ee1626/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eaed90ca79d5bbf034ab35dadbd81ec1129ed6d58fac00776764d9012ee1626/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eaed90ca79d5bbf034ab35dadbd81ec1129ed6d58fac00776764d9012ee1626/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eaed90ca79d5bbf034ab35dadbd81ec1129ed6d58fac00776764d9012ee1626/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:52 compute-0 podman[75270]: 2026-01-27 08:27:52.682290979 +0000 UTC m=+0.199881659 container init 951223e8c0f67b492b7f44aabf774ac484e604e722b7396e76d9513e64ca5374 (image=quay.io/ceph/ceph:v18, name=funny_dhawan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:52 compute-0 podman[75270]: 2026-01-27 08:27:52.688591029 +0000 UTC m=+0.206181629 container start 951223e8c0f67b492b7f44aabf774ac484e604e722b7396e76d9513e64ca5374 (image=quay.io/ceph/ceph:v18, name=funny_dhawan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:52 compute-0 podman[75270]: 2026-01-27 08:27:52.700791389 +0000 UTC m=+0.218382079 container attach 951223e8c0f67b492b7f44aabf774ac484e604e722b7396e76d9513e64ca5374 (image=quay.io/ceph/ceph:v18, name=funny_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Jan 27 08:27:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 27 08:27:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1355529477' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 27 08:27:53 compute-0 systemd[1]: libpod-951223e8c0f67b492b7f44aabf774ac484e604e722b7396e76d9513e64ca5374.scope: Deactivated successfully.
Jan 27 08:27:53 compute-0 podman[75270]: 2026-01-27 08:27:53.20167787 +0000 UTC m=+0.719268470 container died 951223e8c0f67b492b7f44aabf774ac484e604e722b7396e76d9513e64ca5374 (image=quay.io/ceph/ceph:v18, name=funny_dhawan, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:27:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5eaed90ca79d5bbf034ab35dadbd81ec1129ed6d58fac00776764d9012ee1626-merged.mount: Deactivated successfully.
Jan 27 08:27:53 compute-0 podman[75270]: 2026-01-27 08:27:53.276378012 +0000 UTC m=+0.793968612 container remove 951223e8c0f67b492b7f44aabf774ac484e604e722b7396e76d9513e64ca5374 (image=quay.io/ceph/ceph:v18, name=funny_dhawan, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:53 compute-0 systemd[1]: libpod-conmon-951223e8c0f67b492b7f44aabf774ac484e604e722b7396e76d9513e64ca5374.scope: Deactivated successfully.
Jan 27 08:27:53 compute-0 podman[75325]: 2026-01-27 08:27:53.333675122 +0000 UTC m=+0.038740409 container create 1d0e161e0741f40777f06a15628402d597305e1ecbb6e1fb731b877b540aef86 (image=quay.io/ceph/ceph:v18, name=eloquent_ganguly, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:53 compute-0 systemd[1]: Started libpod-conmon-1d0e161e0741f40777f06a15628402d597305e1ecbb6e1fb731b877b540aef86.scope.
Jan 27 08:27:53 compute-0 podman[75325]: 2026-01-27 08:27:53.316508267 +0000 UTC m=+0.021573574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5161a441709d5f848060600bbbcdd06ba61c2a8d1976af2cc506c173fffeae29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5161a441709d5f848060600bbbcdd06ba61c2a8d1976af2cc506c173fffeae29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5161a441709d5f848060600bbbcdd06ba61c2a8d1976af2cc506c173fffeae29/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:53 compute-0 podman[75325]: 2026-01-27 08:27:53.446947566 +0000 UTC m=+0.152012873 container init 1d0e161e0741f40777f06a15628402d597305e1ecbb6e1fb731b877b540aef86 (image=quay.io/ceph/ceph:v18, name=eloquent_ganguly, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:27:53 compute-0 podman[75325]: 2026-01-27 08:27:53.451697515 +0000 UTC m=+0.156762822 container start 1d0e161e0741f40777f06a15628402d597305e1ecbb6e1fb731b877b540aef86 (image=quay.io/ceph/ceph:v18, name=eloquent_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:53 compute-0 podman[75325]: 2026-01-27 08:27:53.456958847 +0000 UTC m=+0.162024144 container attach 1d0e161e0741f40777f06a15628402d597305e1ecbb6e1fb731b877b540aef86 (image=quay.io/ceph/ceph:v18, name=eloquent_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:53 compute-0 ceph-mon[74357]: mgrmap e4: compute-0.vujqxq(active, since 2s)
Jan 27 08:27:53 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1355529477' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 27 08:27:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Jan 27 08:27:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3212939807' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:27:54 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3212939807' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 27 08:27:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3212939807' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  1: '-n'
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  2: 'mgr.compute-0.vujqxq'
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  3: '-f'
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  4: '--setuser'
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  5: 'ceph'
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  6: '--setgroup'
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  7: 'ceph'
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  8: '--default-log-to-file=false'
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  9: '--default-log-to-journald=true'
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr respawn  exe_path /proc/self/exe
Jan 27 08:27:54 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.vujqxq(active, since 4s)
Jan 27 08:27:54 compute-0 systemd[1]: libpod-1d0e161e0741f40777f06a15628402d597305e1ecbb6e1fb731b877b540aef86.scope: Deactivated successfully.
Jan 27 08:27:54 compute-0 podman[75325]: 2026-01-27 08:27:54.655117924 +0000 UTC m=+1.360183251 container died 1d0e161e0741f40777f06a15628402d597305e1ecbb6e1fb731b877b540aef86 (image=quay.io/ceph/ceph:v18, name=eloquent_ganguly, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:54 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: ignoring --setuser ceph since I am not root
Jan 27 08:27:54 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: ignoring --setgroup ceph since I am not root
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: pidfile_write: ignore empty --pid-file
Jan 27 08:27:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5161a441709d5f848060600bbbcdd06ba61c2a8d1976af2cc506c173fffeae29-merged.mount: Deactivated successfully.
Jan 27 08:27:54 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'alerts'
Jan 27 08:27:55 compute-0 podman[75325]: 2026-01-27 08:27:55.035298051 +0000 UTC m=+1.740363338 container remove 1d0e161e0741f40777f06a15628402d597305e1ecbb6e1fb731b877b540aef86 (image=quay.io/ceph/ceph:v18, name=eloquent_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:27:55 compute-0 podman[75404]: 2026-01-27 08:27:55.167323532 +0000 UTC m=+0.108948158 container create 57c92c1903eb1035160a8b0e6c5ef12bb91e18938074a737ff20dcb9ad31a42c (image=quay.io/ceph/ceph:v18, name=peaceful_curran, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 08:27:55 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:55.167+0000 7fe15fc47140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 27 08:27:55 compute-0 ceph-mgr[74650]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 27 08:27:55 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'balancer'
Jan 27 08:27:55 compute-0 podman[75404]: 2026-01-27 08:27:55.08667181 +0000 UTC m=+0.028296466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:55 compute-0 systemd[1]: Started libpod-conmon-57c92c1903eb1035160a8b0e6c5ef12bb91e18938074a737ff20dcb9ad31a42c.scope.
Jan 27 08:27:55 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08e70cb16f16a3576adb5a61250a68d84256f527c4d57df9219346f68134f3e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08e70cb16f16a3576adb5a61250a68d84256f527c4d57df9219346f68134f3e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08e70cb16f16a3576adb5a61250a68d84256f527c4d57df9219346f68134f3e2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:55 compute-0 podman[75404]: 2026-01-27 08:27:55.356875261 +0000 UTC m=+0.298499947 container init 57c92c1903eb1035160a8b0e6c5ef12bb91e18938074a737ff20dcb9ad31a42c (image=quay.io/ceph/ceph:v18, name=peaceful_curran, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:55 compute-0 podman[75404]: 2026-01-27 08:27:55.367094457 +0000 UTC m=+0.308719103 container start 57c92c1903eb1035160a8b0e6c5ef12bb91e18938074a737ff20dcb9ad31a42c (image=quay.io/ceph/ceph:v18, name=peaceful_curran, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 08:27:55 compute-0 ceph-mgr[74650]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 27 08:27:55 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:55.431+0000 7fe15fc47140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 27 08:27:55 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'cephadm'
Jan 27 08:27:55 compute-0 podman[75404]: 2026-01-27 08:27:55.521477764 +0000 UTC m=+0.463102410 container attach 57c92c1903eb1035160a8b0e6c5ef12bb91e18938074a737ff20dcb9ad31a42c (image=quay.io/ceph/ceph:v18, name=peaceful_curran, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 27 08:27:55 compute-0 systemd[1]: libpod-conmon-1d0e161e0741f40777f06a15628402d597305e1ecbb6e1fb731b877b540aef86.scope: Deactivated successfully.
Jan 27 08:27:55 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3212939807' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 27 08:27:55 compute-0 ceph-mon[74357]: mgrmap e5: compute-0.vujqxq(active, since 4s)
Jan 27 08:27:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 27 08:27:55 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168115137' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 27 08:27:55 compute-0 peaceful_curran[75421]: {
Jan 27 08:27:55 compute-0 peaceful_curran[75421]:     "epoch": 5,
Jan 27 08:27:55 compute-0 peaceful_curran[75421]:     "available": true,
Jan 27 08:27:55 compute-0 peaceful_curran[75421]:     "active_name": "compute-0.vujqxq",
Jan 27 08:27:55 compute-0 peaceful_curran[75421]:     "num_standby": 0
Jan 27 08:27:55 compute-0 peaceful_curran[75421]: }
Jan 27 08:27:55 compute-0 systemd[1]: libpod-57c92c1903eb1035160a8b0e6c5ef12bb91e18938074a737ff20dcb9ad31a42c.scope: Deactivated successfully.
Jan 27 08:27:55 compute-0 podman[75404]: 2026-01-27 08:27:55.98372719 +0000 UTC m=+0.925351806 container died 57c92c1903eb1035160a8b0e6c5ef12bb91e18938074a737ff20dcb9ad31a42c (image=quay.io/ceph/ceph:v18, name=peaceful_curran, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-08e70cb16f16a3576adb5a61250a68d84256f527c4d57df9219346f68134f3e2-merged.mount: Deactivated successfully.
Jan 27 08:27:56 compute-0 podman[75404]: 2026-01-27 08:27:56.502874997 +0000 UTC m=+1.444499613 container remove 57c92c1903eb1035160a8b0e6c5ef12bb91e18938074a737ff20dcb9ad31a42c (image=quay.io/ceph/ceph:v18, name=peaceful_curran, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:56 compute-0 systemd[1]: libpod-conmon-57c92c1903eb1035160a8b0e6c5ef12bb91e18938074a737ff20dcb9ad31a42c.scope: Deactivated successfully.
Jan 27 08:27:56 compute-0 podman[75460]: 2026-01-27 08:27:56.550461274 +0000 UTC m=+0.026616481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:27:56 compute-0 podman[75460]: 2026-01-27 08:27:56.780733354 +0000 UTC m=+0.256888591 container create edf501ec50fcf49300915df3d5b3ed35b4ae69b4196697bac7e2db5aa9c9a1df (image=quay.io/ceph/ceph:v18, name=boring_noyce, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:27:56 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3168115137' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 27 08:27:57 compute-0 systemd[1]: Started libpod-conmon-edf501ec50fcf49300915df3d5b3ed35b4ae69b4196697bac7e2db5aa9c9a1df.scope.
Jan 27 08:27:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69ea8046124d25040b8f4104652f7fabe49c832bdeffffd0b9bdb5315e3eaa1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69ea8046124d25040b8f4104652f7fabe49c832bdeffffd0b9bdb5315e3eaa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69ea8046124d25040b8f4104652f7fabe49c832bdeffffd0b9bdb5315e3eaa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:57 compute-0 podman[75460]: 2026-01-27 08:27:57.12429544 +0000 UTC m=+0.600450647 container init edf501ec50fcf49300915df3d5b3ed35b4ae69b4196697bac7e2db5aa9c9a1df (image=quay.io/ceph/ceph:v18, name=boring_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:57 compute-0 podman[75460]: 2026-01-27 08:27:57.134262929 +0000 UTC m=+0.610418156 container start edf501ec50fcf49300915df3d5b3ed35b4ae69b4196697bac7e2db5aa9c9a1df (image=quay.io/ceph/ceph:v18, name=boring_noyce, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 27 08:27:57 compute-0 podman[75460]: 2026-01-27 08:27:57.30727165 +0000 UTC m=+0.783426857 container attach edf501ec50fcf49300915df3d5b3ed35b4ae69b4196697bac7e2db5aa9c9a1df (image=quay.io/ceph/ceph:v18, name=boring_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:27:57 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'crash'
Jan 27 08:27:57 compute-0 ceph-mgr[74650]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 27 08:27:57 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:57.820+0000 7fe15fc47140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 27 08:27:57 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'dashboard'
Jan 27 08:27:59 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'devicehealth'
Jan 27 08:27:59 compute-0 ceph-mgr[74650]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 27 08:27:59 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:27:59.626+0000 7fe15fc47140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 27 08:27:59 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'diskprediction_local'
Jan 27 08:28:00 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 27 08:28:00 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 27 08:28:00 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]:   from numpy import show_config as show_numpy_config
Jan 27 08:28:00 compute-0 ceph-mgr[74650]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 27 08:28:00 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:00.188+0000 7fe15fc47140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 27 08:28:00 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'influx'
Jan 27 08:28:00 compute-0 ceph-mgr[74650]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 27 08:28:00 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:00.437+0000 7fe15fc47140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 27 08:28:00 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'insights'
Jan 27 08:28:00 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'iostat'
Jan 27 08:28:00 compute-0 ceph-mgr[74650]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 27 08:28:00 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:00.926+0000 7fe15fc47140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 27 08:28:00 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'k8sevents'
Jan 27 08:28:02 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'localpool'
Jan 27 08:28:02 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'mds_autoscaler'
Jan 27 08:28:03 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'mirroring'
Jan 27 08:28:03 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'nfs'
Jan 27 08:28:04 compute-0 ceph-mgr[74650]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 27 08:28:04 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:04.593+0000 7fe15fc47140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 27 08:28:04 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'orchestrator'
Jan 27 08:28:05 compute-0 ceph-mgr[74650]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 27 08:28:05 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:05.251+0000 7fe15fc47140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 27 08:28:05 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'osd_perf_query'
Jan 27 08:28:05 compute-0 ceph-mgr[74650]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 27 08:28:05 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:05.505+0000 7fe15fc47140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 27 08:28:05 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'osd_support'
Jan 27 08:28:05 compute-0 ceph-mgr[74650]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 27 08:28:05 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:05.752+0000 7fe15fc47140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 27 08:28:05 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'pg_autoscaler'
Jan 27 08:28:06 compute-0 ceph-mgr[74650]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 27 08:28:06 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:06.028+0000 7fe15fc47140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 27 08:28:06 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'progress'
Jan 27 08:28:06 compute-0 ceph-mgr[74650]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 27 08:28:06 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:06.261+0000 7fe15fc47140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 27 08:28:06 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'prometheus'
Jan 27 08:28:07 compute-0 ceph-mgr[74650]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 27 08:28:07 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:07.287+0000 7fe15fc47140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 27 08:28:07 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'rbd_support'
Jan 27 08:28:07 compute-0 ceph-mgr[74650]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 27 08:28:07 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:07.620+0000 7fe15fc47140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 27 08:28:07 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'restful'
Jan 27 08:28:08 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'rgw'
Jan 27 08:28:09 compute-0 ceph-mgr[74650]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 27 08:28:09 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:09.186+0000 7fe15fc47140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 27 08:28:09 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'rook'
Jan 27 08:28:11 compute-0 ceph-mgr[74650]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 27 08:28:11 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:11.341+0000 7fe15fc47140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 27 08:28:11 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'selftest'
Jan 27 08:28:11 compute-0 ceph-mgr[74650]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 27 08:28:11 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:11.606+0000 7fe15fc47140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 27 08:28:11 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'snap_schedule'
Jan 27 08:28:11 compute-0 ceph-mgr[74650]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 27 08:28:11 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:11.888+0000 7fe15fc47140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 27 08:28:11 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'stats'
Jan 27 08:28:12 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'status'
Jan 27 08:28:12 compute-0 ceph-mgr[74650]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 27 08:28:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:12.397+0000 7fe15fc47140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 27 08:28:12 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'telegraf'
Jan 27 08:28:12 compute-0 ceph-mgr[74650]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 27 08:28:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:12.645+0000 7fe15fc47140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 27 08:28:12 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'telemetry'
Jan 27 08:28:13 compute-0 ceph-mgr[74650]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 27 08:28:13 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:13.228+0000 7fe15fc47140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 27 08:28:13 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'test_orchestrator'
Jan 27 08:28:13 compute-0 ceph-mgr[74650]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 27 08:28:13 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:13.912+0000 7fe15fc47140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 27 08:28:13 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'volumes'
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 27 08:28:14 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:14.673+0000 7fe15fc47140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: mgr[py] Loading python module 'zabbix'
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 27 08:28:14 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:28:14.904+0000 7fe15fc47140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Active manager daemon compute-0.vujqxq restarted
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vujqxq
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: ms_deliver_dispatch: unhandled message 0x55657db8c420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: mgr handle_mgr_map Activating!
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: mgr handle_mgr_map I am now activating
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.vujqxq(active, starting, since 0.0192098s)
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vujqxq", "id": "compute-0.vujqxq"} v 0) v1
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vujqxq", "id": "compute-0.vujqxq"}]: dispatch
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e1 all = 1
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: balancer
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Manager daemon compute-0.vujqxq is now available
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Starting
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:28:14
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [balancer INFO root] No pools available
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Jan 27 08:28:14 compute-0 ceph-mon[74357]: Active manager daemon compute-0.vujqxq restarted
Jan 27 08:28:14 compute-0 ceph-mon[74357]: Activating manager daemon compute-0.vujqxq
Jan 27 08:28:14 compute-0 ceph-mon[74357]: osdmap e2: 0 total, 0 up, 0 in
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mgrmap e6: compute-0.vujqxq(active, starting, since 0.0192098s)
Jan 27 08:28:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 27 08:28:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vujqxq", "id": "compute-0.vujqxq"}]: dispatch
Jan 27 08:28:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 27 08:28:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 27 08:28:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 27 08:28:14 compute-0 ceph-mon[74357]: Manager daemon compute-0.vujqxq is now available
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: cephadm
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: crash
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: devicehealth
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [devicehealth INFO root] Starting
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: iostat
Jan 27 08:28:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 27 08:28:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: nfs
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:14 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: orchestrator
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: pg_autoscaler
Jan 27 08:28:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 27 08:28:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: progress
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [progress INFO root] Loading...
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [progress INFO root] No stored events to load
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [progress INFO root] Loaded [] historic events
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [progress INFO root] Loaded OSDMap, ready.
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] recovery thread starting
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] starting setup
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: rbd_support
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: restful
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: status
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [restful INFO root] server_addr: :: server_port: 8003
Jan 27 08:28:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vujqxq/mirror_snapshot_schedule"} v 0) v1
Jan 27 08:28:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vujqxq/mirror_snapshot_schedule"}]: dispatch
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: telemetry
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] PerfHandler: starting
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TaskHandler: starting
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [restful WARNING root] server not running: no certificate configured
Jan 27 08:28:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vujqxq/trash_purge_schedule"} v 0) v1
Jan 27 08:28:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vujqxq/trash_purge_schedule"}]: dispatch
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] setup complete
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: mgr load Constructed class from module: volumes
Jan 27 08:28:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Jan 27 08:28:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Jan 27 08:28:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:15 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.vujqxq(active, since 1.02684s)
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 27 08:28:15 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 27 08:28:15 compute-0 boring_noyce[75487]: {
Jan 27 08:28:15 compute-0 boring_noyce[75487]:     "mgrmap_epoch": 7,
Jan 27 08:28:15 compute-0 boring_noyce[75487]:     "initialized": true
Jan 27 08:28:15 compute-0 boring_noyce[75487]: }
Jan 27 08:28:15 compute-0 systemd[1]: libpod-edf501ec50fcf49300915df3d5b3ed35b4ae69b4196697bac7e2db5aa9c9a1df.scope: Deactivated successfully.
Jan 27 08:28:15 compute-0 podman[75460]: 2026-01-27 08:28:15.956811514 +0000 UTC m=+19.432966731 container died edf501ec50fcf49300915df3d5b3ed35b4ae69b4196697bac7e2db5aa9c9a1df (image=quay.io/ceph/ceph:v18, name=boring_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:28:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e69ea8046124d25040b8f4104652f7fabe49c832bdeffffd0b9bdb5315e3eaa1-merged.mount: Deactivated successfully.
Jan 27 08:28:15 compute-0 ceph-mon[74357]: Found migration_current of "None". Setting to last migration.
Jan 27 08:28:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 27 08:28:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 27 08:28:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vujqxq/mirror_snapshot_schedule"}]: dispatch
Jan 27 08:28:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vujqxq/trash_purge_schedule"}]: dispatch
Jan 27 08:28:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:15 compute-0 ceph-mon[74357]: mgrmap e7: compute-0.vujqxq(active, since 1.02684s)
Jan 27 08:28:16 compute-0 podman[75460]: 2026-01-27 08:28:16.018903564 +0000 UTC m=+19.495058791 container remove edf501ec50fcf49300915df3d5b3ed35b4ae69b4196697bac7e2db5aa9c9a1df (image=quay.io/ceph/ceph:v18, name=boring_noyce, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:28:16 compute-0 systemd[1]: libpod-conmon-edf501ec50fcf49300915df3d5b3ed35b4ae69b4196697bac7e2db5aa9c9a1df.scope: Deactivated successfully.
Jan 27 08:28:16 compute-0 podman[75637]: 2026-01-27 08:28:16.076005899 +0000 UTC m=+0.038588625 container create 5b0f658d25e10962cdfa2681b030aba3927412742c7777374e40cb33b9db3cd4 (image=quay.io/ceph/ceph:v18, name=competent_grothendieck, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 08:28:16 compute-0 systemd[1]: Started libpod-conmon-5b0f658d25e10962cdfa2681b030aba3927412742c7777374e40cb33b9db3cd4.scope.
Jan 27 08:28:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f2b042abbd83c39376f2630b6a235d4340dac7a05f184ac2a72e4f605011104/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f2b042abbd83c39376f2630b6a235d4340dac7a05f184ac2a72e4f605011104/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f2b042abbd83c39376f2630b6a235d4340dac7a05f184ac2a72e4f605011104/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:16 compute-0 podman[75637]: 2026-01-27 08:28:16.057244602 +0000 UTC m=+0.019827298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:16 compute-0 podman[75637]: 2026-01-27 08:28:16.162936871 +0000 UTC m=+0.125519567 container init 5b0f658d25e10962cdfa2681b030aba3927412742c7777374e40cb33b9db3cd4 (image=quay.io/ceph/ceph:v18, name=competent_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:16 compute-0 podman[75637]: 2026-01-27 08:28:16.167399871 +0000 UTC m=+0.129982567 container start 5b0f658d25e10962cdfa2681b030aba3927412742c7777374e40cb33b9db3cd4 (image=quay.io/ceph/ceph:v18, name=competent_grothendieck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:28:16 compute-0 podman[75637]: 2026-01-27 08:28:16.170371282 +0000 UTC m=+0.132954028 container attach 5b0f658d25e10962cdfa2681b030aba3927412742c7777374e40cb33b9db3cd4 (image=quay.io/ceph/ceph:v18, name=competent_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019920039 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:16 compute-0 ceph-mgr[74650]: [cephadm INFO cherrypy.error] [27/Jan/2026:08:28:16] ENGINE Bus STARTING
Jan 27 08:28:16 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : [27/Jan/2026:08:28:16] ENGINE Bus STARTING
Jan 27 08:28:16 compute-0 ceph-mgr[74650]: [cephadm INFO cherrypy.error] [27/Jan/2026:08:28:16] ENGINE Serving on http://192.168.122.100:8765
Jan 27 08:28:16 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : [27/Jan/2026:08:28:16] ENGINE Serving on http://192.168.122.100:8765
Jan 27 08:28:16 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Jan 27 08:28:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 27 08:28:16 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 27 08:28:16 compute-0 ceph-mgr[74650]: [cephadm INFO cherrypy.error] [27/Jan/2026:08:28:16] ENGINE Serving on https://192.168.122.100:7150
Jan 27 08:28:16 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : [27/Jan/2026:08:28:16] ENGINE Serving on https://192.168.122.100:7150
Jan 27 08:28:16 compute-0 ceph-mgr[74650]: [cephadm INFO cherrypy.error] [27/Jan/2026:08:28:16] ENGINE Bus STARTED
Jan 27 08:28:16 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : [27/Jan/2026:08:28:16] ENGINE Bus STARTED
Jan 27 08:28:16 compute-0 ceph-mgr[74650]: [cephadm INFO cherrypy.error] [27/Jan/2026:08:28:16] ENGINE Client ('192.168.122.100', 51278) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 27 08:28:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 27 08:28:16 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 27 08:28:16 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : [27/Jan/2026:08:28:16] ENGINE Client ('192.168.122.100', 51278) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 27 08:28:16 compute-0 systemd[1]: libpod-5b0f658d25e10962cdfa2681b030aba3927412742c7777374e40cb33b9db3cd4.scope: Deactivated successfully.
Jan 27 08:28:16 compute-0 podman[75637]: 2026-01-27 08:28:16.721026921 +0000 UTC m=+0.683609617 container died 5b0f658d25e10962cdfa2681b030aba3927412742c7777374e40cb33b9db3cd4 (image=quay.io/ceph/ceph:v18, name=competent_grothendieck, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:28:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f2b042abbd83c39376f2630b6a235d4340dac7a05f184ac2a72e4f605011104-merged.mount: Deactivated successfully.
Jan 27 08:28:16 compute-0 podman[75637]: 2026-01-27 08:28:16.763328925 +0000 UTC m=+0.725911651 container remove 5b0f658d25e10962cdfa2681b030aba3927412742c7777374e40cb33b9db3cd4 (image=quay.io/ceph/ceph:v18, name=competent_grothendieck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:28:16 compute-0 systemd[1]: libpod-conmon-5b0f658d25e10962cdfa2681b030aba3927412742c7777374e40cb33b9db3cd4.scope: Deactivated successfully.
Jan 27 08:28:16 compute-0 podman[75713]: 2026-01-27 08:28:16.821303184 +0000 UTC m=+0.041596057 container create d587c7108d6805489902fd6efa89cf51faee76486d50475519b72bfe2b38b9ab (image=quay.io/ceph/ceph:v18, name=elastic_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:28:16 compute-0 systemd[1]: Started libpod-conmon-d587c7108d6805489902fd6efa89cf51faee76486d50475519b72bfe2b38b9ab.scope.
Jan 27 08:28:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e07c1d8d5f5c1e1fc86b4cde392fc705dce315ef6464c57fea232bf765aa721/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e07c1d8d5f5c1e1fc86b4cde392fc705dce315ef6464c57fea232bf765aa721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e07c1d8d5f5c1e1fc86b4cde392fc705dce315ef6464c57fea232bf765aa721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:16 compute-0 podman[75713]: 2026-01-27 08:28:16.879626502 +0000 UTC m=+0.099919375 container init d587c7108d6805489902fd6efa89cf51faee76486d50475519b72bfe2b38b9ab (image=quay.io/ceph/ceph:v18, name=elastic_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 27 08:28:16 compute-0 podman[75713]: 2026-01-27 08:28:16.888694007 +0000 UTC m=+0.108986880 container start d587c7108d6805489902fd6efa89cf51faee76486d50475519b72bfe2b38b9ab (image=quay.io/ceph/ceph:v18, name=elastic_galileo, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:16 compute-0 podman[75713]: 2026-01-27 08:28:16.892555561 +0000 UTC m=+0.112848464 container attach d587c7108d6805489902fd6efa89cf51faee76486d50475519b72bfe2b38b9ab (image=quay.io/ceph/ceph:v18, name=elastic_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 27 08:28:16 compute-0 podman[75713]: 2026-01-27 08:28:16.800053779 +0000 UTC m=+0.020346702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:16 compute-0 ceph-mgr[74650]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:28:17 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Jan 27 08:28:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:17 compute-0 ceph-mgr[74650]: [cephadm INFO root] Set ssh ssh_user
Jan 27 08:28:17 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 27 08:28:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Jan 27 08:28:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:17 compute-0 ceph-mgr[74650]: [cephadm INFO root] Set ssh ssh_config
Jan 27 08:28:17 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 27 08:28:17 compute-0 ceph-mgr[74650]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 27 08:28:17 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 27 08:28:17 compute-0 elastic_galileo[75730]: ssh user set to ceph-admin. sudo will be used
Jan 27 08:28:17 compute-0 systemd[1]: libpod-d587c7108d6805489902fd6efa89cf51faee76486d50475519b72bfe2b38b9ab.scope: Deactivated successfully.
Jan 27 08:28:17 compute-0 podman[75713]: 2026-01-27 08:28:17.500590072 +0000 UTC m=+0.720882985 container died d587c7108d6805489902fd6efa89cf51faee76486d50475519b72bfe2b38b9ab (image=quay.io/ceph/ceph:v18, name=elastic_galileo, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e07c1d8d5f5c1e1fc86b4cde392fc705dce315ef6464c57fea232bf765aa721-merged.mount: Deactivated successfully.
Jan 27 08:28:17 compute-0 podman[75713]: 2026-01-27 08:28:17.664432255 +0000 UTC m=+0.884725168 container remove d587c7108d6805489902fd6efa89cf51faee76486d50475519b72bfe2b38b9ab (image=quay.io/ceph/ceph:v18, name=elastic_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 08:28:17 compute-0 systemd[1]: libpod-conmon-d587c7108d6805489902fd6efa89cf51faee76486d50475519b72bfe2b38b9ab.scope: Deactivated successfully.
Jan 27 08:28:17 compute-0 ceph-mon[74357]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 27 08:28:17 compute-0 ceph-mon[74357]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 27 08:28:17 compute-0 ceph-mon[74357]: [27/Jan/2026:08:28:16] ENGINE Bus STARTING
Jan 27 08:28:17 compute-0 ceph-mon[74357]: [27/Jan/2026:08:28:16] ENGINE Serving on http://192.168.122.100:8765
Jan 27 08:28:17 compute-0 ceph-mon[74357]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 27 08:28:17 compute-0 ceph-mon[74357]: [27/Jan/2026:08:28:16] ENGINE Serving on https://192.168.122.100:7150
Jan 27 08:28:17 compute-0 ceph-mon[74357]: [27/Jan/2026:08:28:16] ENGINE Bus STARTED
Jan 27 08:28:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 27 08:28:17 compute-0 ceph-mon[74357]: [27/Jan/2026:08:28:16] ENGINE Client ('192.168.122.100', 51278) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 27 08:28:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:17 compute-0 podman[75768]: 2026-01-27 08:28:17.765841099 +0000 UTC m=+0.068436443 container create 39ea0dcd4a1a8f8bbca87ddc9577da308214d0addd6bad874332c5481c6a52d5 (image=quay.io/ceph/ceph:v18, name=dreamy_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:17 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.vujqxq(active, since 2s)
Jan 27 08:28:17 compute-0 systemd[1]: Started libpod-conmon-39ea0dcd4a1a8f8bbca87ddc9577da308214d0addd6bad874332c5481c6a52d5.scope.
Jan 27 08:28:17 compute-0 podman[75768]: 2026-01-27 08:28:17.737942334 +0000 UTC m=+0.040537688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:17 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410fb2a0ef1700bdd7f9068a0d38becdcb72538877b5008e21f80edd8c2d40df/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410fb2a0ef1700bdd7f9068a0d38becdcb72538877b5008e21f80edd8c2d40df/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410fb2a0ef1700bdd7f9068a0d38becdcb72538877b5008e21f80edd8c2d40df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410fb2a0ef1700bdd7f9068a0d38becdcb72538877b5008e21f80edd8c2d40df/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410fb2a0ef1700bdd7f9068a0d38becdcb72538877b5008e21f80edd8c2d40df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:17 compute-0 podman[75768]: 2026-01-27 08:28:17.856976024 +0000 UTC m=+0.159571378 container init 39ea0dcd4a1a8f8bbca87ddc9577da308214d0addd6bad874332c5481c6a52d5 (image=quay.io/ceph/ceph:v18, name=dreamy_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:28:17 compute-0 podman[75768]: 2026-01-27 08:28:17.86790482 +0000 UTC m=+0.170500144 container start 39ea0dcd4a1a8f8bbca87ddc9577da308214d0addd6bad874332c5481c6a52d5 (image=quay.io/ceph/ceph:v18, name=dreamy_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:17 compute-0 podman[75768]: 2026-01-27 08:28:17.871250721 +0000 UTC m=+0.173846045 container attach 39ea0dcd4a1a8f8bbca87ddc9577da308214d0addd6bad874332c5481c6a52d5 (image=quay.io/ceph/ceph:v18, name=dreamy_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:18 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Jan 27 08:28:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:18 compute-0 ceph-mgr[74650]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 27 08:28:18 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 27 08:28:18 compute-0 ceph-mgr[74650]: [cephadm INFO root] Set ssh private key
Jan 27 08:28:18 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 27 08:28:18 compute-0 systemd[1]: libpod-39ea0dcd4a1a8f8bbca87ddc9577da308214d0addd6bad874332c5481c6a52d5.scope: Deactivated successfully.
Jan 27 08:28:18 compute-0 podman[75768]: 2026-01-27 08:28:18.687140705 +0000 UTC m=+0.989736019 container died 39ea0dcd4a1a8f8bbca87ddc9577da308214d0addd6bad874332c5481c6a52d5 (image=quay.io/ceph/ceph:v18, name=dreamy_ritchie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:28:18 compute-0 ceph-mgr[74650]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:28:18 compute-0 ceph-mon[74357]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:18 compute-0 ceph-mon[74357]: Set ssh ssh_user
Jan 27 08:28:18 compute-0 ceph-mon[74357]: Set ssh ssh_config
Jan 27 08:28:18 compute-0 ceph-mon[74357]: ssh user set to ceph-admin. sudo will be used
Jan 27 08:28:18 compute-0 ceph-mon[74357]: mgrmap e8: compute-0.vujqxq(active, since 2s)
Jan 27 08:28:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-410fb2a0ef1700bdd7f9068a0d38becdcb72538877b5008e21f80edd8c2d40df-merged.mount: Deactivated successfully.
Jan 27 08:28:19 compute-0 podman[75768]: 2026-01-27 08:28:19.318850996 +0000 UTC m=+1.621446310 container remove 39ea0dcd4a1a8f8bbca87ddc9577da308214d0addd6bad874332c5481c6a52d5 (image=quay.io/ceph/ceph:v18, name=dreamy_ritchie, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:28:19 compute-0 systemd[1]: libpod-conmon-39ea0dcd4a1a8f8bbca87ddc9577da308214d0addd6bad874332c5481c6a52d5.scope: Deactivated successfully.
Jan 27 08:28:19 compute-0 podman[75821]: 2026-01-27 08:28:19.362269371 +0000 UTC m=+0.024441443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:19 compute-0 podman[75821]: 2026-01-27 08:28:19.456797509 +0000 UTC m=+0.118969521 container create eb14c86577d0fc7efbbca3c0222a12a168eb134b9e2c450e26448422fa36341c (image=quay.io/ceph/ceph:v18, name=wizardly_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 27 08:28:19 compute-0 systemd[1]: Started libpod-conmon-eb14c86577d0fc7efbbca3c0222a12a168eb134b9e2c450e26448422fa36341c.scope.
Jan 27 08:28:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58369d5af263832a820dd0fc4025e2eb57224b8d6e524648b15565f81d361787/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58369d5af263832a820dd0fc4025e2eb57224b8d6e524648b15565f81d361787/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58369d5af263832a820dd0fc4025e2eb57224b8d6e524648b15565f81d361787/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58369d5af263832a820dd0fc4025e2eb57224b8d6e524648b15565f81d361787/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58369d5af263832a820dd0fc4025e2eb57224b8d6e524648b15565f81d361787/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:19 compute-0 podman[75821]: 2026-01-27 08:28:19.535278112 +0000 UTC m=+0.197450144 container init eb14c86577d0fc7efbbca3c0222a12a168eb134b9e2c450e26448422fa36341c (image=quay.io/ceph/ceph:v18, name=wizardly_archimedes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:19 compute-0 podman[75821]: 2026-01-27 08:28:19.541756537 +0000 UTC m=+0.203928549 container start eb14c86577d0fc7efbbca3c0222a12a168eb134b9e2c450e26448422fa36341c (image=quay.io/ceph/ceph:v18, name=wizardly_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:28:19 compute-0 podman[75821]: 2026-01-27 08:28:19.548942151 +0000 UTC m=+0.211114143 container attach eb14c86577d0fc7efbbca3c0222a12a168eb134b9e2c450e26448422fa36341c (image=quay.io/ceph/ceph:v18, name=wizardly_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:19 compute-0 ceph-mon[74357]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:19 compute-0 ceph-mon[74357]: Set ssh ssh_identity_key
Jan 27 08:28:19 compute-0 ceph-mon[74357]: Set ssh private key
Jan 27 08:28:20 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Jan 27 08:28:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:20 compute-0 ceph-mgr[74650]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 27 08:28:20 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 27 08:28:20 compute-0 systemd[1]: libpod-eb14c86577d0fc7efbbca3c0222a12a168eb134b9e2c450e26448422fa36341c.scope: Deactivated successfully.
Jan 27 08:28:20 compute-0 podman[75821]: 2026-01-27 08:28:20.130418324 +0000 UTC m=+0.792590336 container died eb14c86577d0fc7efbbca3c0222a12a168eb134b9e2c450e26448422fa36341c (image=quay.io/ceph/ceph:v18, name=wizardly_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-58369d5af263832a820dd0fc4025e2eb57224b8d6e524648b15565f81d361787-merged.mount: Deactivated successfully.
Jan 27 08:28:20 compute-0 podman[75821]: 2026-01-27 08:28:20.206088481 +0000 UTC m=+0.868260513 container remove eb14c86577d0fc7efbbca3c0222a12a168eb134b9e2c450e26448422fa36341c (image=quay.io/ceph/ceph:v18, name=wizardly_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:28:20 compute-0 systemd[1]: libpod-conmon-eb14c86577d0fc7efbbca3c0222a12a168eb134b9e2c450e26448422fa36341c.scope: Deactivated successfully.
Jan 27 08:28:20 compute-0 podman[75874]: 2026-01-27 08:28:20.299698503 +0000 UTC m=+0.064491896 container create e397ae670286f4e313fdece0a81be87806aeaeb7e420dec37bc850f2395a7dd0 (image=quay.io/ceph/ceph:v18, name=determined_wu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:28:20 compute-0 systemd[1]: Started libpod-conmon-e397ae670286f4e313fdece0a81be87806aeaeb7e420dec37bc850f2395a7dd0.scope.
Jan 27 08:28:20 compute-0 podman[75874]: 2026-01-27 08:28:20.272679303 +0000 UTC m=+0.037472766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36cdea231fd376f607cc9356d0179e553f5137b3a2a63a91b70f6afc2fd530c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36cdea231fd376f607cc9356d0179e553f5137b3a2a63a91b70f6afc2fd530c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36cdea231fd376f607cc9356d0179e553f5137b3a2a63a91b70f6afc2fd530c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:20 compute-0 podman[75874]: 2026-01-27 08:28:20.390605213 +0000 UTC m=+0.155398606 container init e397ae670286f4e313fdece0a81be87806aeaeb7e420dec37bc850f2395a7dd0 (image=quay.io/ceph/ceph:v18, name=determined_wu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 08:28:20 compute-0 podman[75874]: 2026-01-27 08:28:20.397398637 +0000 UTC m=+0.162192020 container start e397ae670286f4e313fdece0a81be87806aeaeb7e420dec37bc850f2395a7dd0 (image=quay.io/ceph/ceph:v18, name=determined_wu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:28:20 compute-0 podman[75874]: 2026-01-27 08:28:20.403152812 +0000 UTC m=+0.167946325 container attach e397ae670286f4e313fdece0a81be87806aeaeb7e420dec37bc850f2395a7dd0 (image=quay.io/ceph/ceph:v18, name=determined_wu, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:28:20 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:20 compute-0 determined_wu[75890]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDsLg3fEgomSWgI6Gr2TNQULuLtp52wpTvtAelPjmkirnWK4QZYtKR7hOTAPNDF5s3C8S/G94JMyBQ7i1E65z9l6W/ySU7UWj1Gei8KN8AiHgirYY/fAHuN+vFCeQrW8bfECec9mooLBsTwYtt39J6xz69uFZQserR9BgjoGLNCp1zua6Orbvql+U3Zkz7ZT5SFVFPKCz2SQCglilH0fhkyQU1kZjBnA82v4GhqtJgzwsWmdbdxxD9jNYHmWKXbeiP+xP289bJlFdmkaAa82ewFFNa3vDjte5R1eMphcmjdPOxL2kZ8g/q05gY87mXSMq8wNOCrobchlxMtn4PIp3aP0hQtpV+5nDByXvp3WVKPQRguLbUyI+nGagOOsbibci6K4vYpQOOnaVATKDUr4qpBZaF0uRq082lSSoHojvRXuFbBJc5GK0nQdqpTyz+vTnqOIv+G3vkp41JODu4NaT0NhrTTLyoFtK6ECGX2/IdSOBnjBXrCv0ldCKLciNYGpms= zuul@controller
Jan 27 08:28:20 compute-0 systemd[1]: libpod-e397ae670286f4e313fdece0a81be87806aeaeb7e420dec37bc850f2395a7dd0.scope: Deactivated successfully.
Jan 27 08:28:20 compute-0 ceph-mgr[74650]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:28:20 compute-0 podman[75916]: 2026-01-27 08:28:20.966214856 +0000 UTC m=+0.024159394 container died e397ae670286f4e313fdece0a81be87806aeaeb7e420dec37bc850f2395a7dd0 (image=quay.io/ceph/ceph:v18, name=determined_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-36cdea231fd376f607cc9356d0179e553f5137b3a2a63a91b70f6afc2fd530c2-merged.mount: Deactivated successfully.
Jan 27 08:28:21 compute-0 podman[75916]: 2026-01-27 08:28:21.025336256 +0000 UTC m=+0.083280784 container remove e397ae670286f4e313fdece0a81be87806aeaeb7e420dec37bc850f2395a7dd0 (image=quay.io/ceph/ceph:v18, name=determined_wu, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:21 compute-0 systemd[1]: libpod-conmon-e397ae670286f4e313fdece0a81be87806aeaeb7e420dec37bc850f2395a7dd0.scope: Deactivated successfully.
Jan 27 08:28:21 compute-0 podman[75932]: 2026-01-27 08:28:21.094488957 +0000 UTC m=+0.043978871 container create ec66ecb1bce2d223490e9f802acb08190eb4a9b7a12fc3b1f4ed0e8859d6698a (image=quay.io/ceph/ceph:v18, name=cranky_bouman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:28:21 compute-0 systemd[1]: Started libpod-conmon-ec66ecb1bce2d223490e9f802acb08190eb4a9b7a12fc3b1f4ed0e8859d6698a.scope.
Jan 27 08:28:21 compute-0 ceph-mon[74357]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:21 compute-0 ceph-mon[74357]: Set ssh ssh_identity_pub
Jan 27 08:28:21 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb755d7639a0465fa6bb4ff1d032fd4a409753b5e46a6c54bbd4a7ec46e859f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb755d7639a0465fa6bb4ff1d032fd4a409753b5e46a6c54bbd4a7ec46e859f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb755d7639a0465fa6bb4ff1d032fd4a409753b5e46a6c54bbd4a7ec46e859f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:21 compute-0 podman[75932]: 2026-01-27 08:28:21.162755864 +0000 UTC m=+0.112245798 container init ec66ecb1bce2d223490e9f802acb08190eb4a9b7a12fc3b1f4ed0e8859d6698a (image=quay.io/ceph/ceph:v18, name=cranky_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:21 compute-0 podman[75932]: 2026-01-27 08:28:21.167216995 +0000 UTC m=+0.116706909 container start ec66ecb1bce2d223490e9f802acb08190eb4a9b7a12fc3b1f4ed0e8859d6698a (image=quay.io/ceph/ceph:v18, name=cranky_bouman, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:21 compute-0 podman[75932]: 2026-01-27 08:28:21.073608263 +0000 UTC m=+0.023098197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:21 compute-0 podman[75932]: 2026-01-27 08:28:21.178593672 +0000 UTC m=+0.128083877 container attach ec66ecb1bce2d223490e9f802acb08190eb4a9b7a12fc3b1f4ed0e8859d6698a (image=quay.io/ceph/ceph:v18, name=cranky_bouman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:28:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052984 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:21 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:21 compute-0 sshd-session[75974]: Accepted publickey for ceph-admin from 192.168.122.100 port 37408 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:28:21 compute-0 systemd-logind[799]: New session 21 of user ceph-admin.
Jan 27 08:28:21 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 27 08:28:21 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 27 08:28:21 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 27 08:28:21 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 27 08:28:21 compute-0 systemd[75978]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:22 compute-0 sshd-session[75981]: Accepted publickey for ceph-admin from 192.168.122.100 port 37410 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:28:22 compute-0 systemd-logind[799]: New session 23 of user ceph-admin.
Jan 27 08:28:22 compute-0 systemd[75978]: Queued start job for default target Main User Target.
Jan 27 08:28:22 compute-0 ceph-mon[74357]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:22 compute-0 ceph-mon[74357]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:22 compute-0 systemd[75978]: Created slice User Application Slice.
Jan 27 08:28:22 compute-0 systemd[75978]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 27 08:28:22 compute-0 systemd[75978]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 08:28:22 compute-0 systemd[75978]: Reached target Paths.
Jan 27 08:28:22 compute-0 systemd[75978]: Reached target Timers.
Jan 27 08:28:22 compute-0 systemd[75978]: Starting D-Bus User Message Bus Socket...
Jan 27 08:28:22 compute-0 systemd[75978]: Starting Create User's Volatile Files and Directories...
Jan 27 08:28:22 compute-0 systemd[75978]: Finished Create User's Volatile Files and Directories.
Jan 27 08:28:22 compute-0 systemd[75978]: Listening on D-Bus User Message Bus Socket.
Jan 27 08:28:22 compute-0 systemd[75978]: Reached target Sockets.
Jan 27 08:28:22 compute-0 systemd[75978]: Reached target Basic System.
Jan 27 08:28:22 compute-0 systemd[75978]: Reached target Main User Target.
Jan 27 08:28:22 compute-0 systemd[75978]: Startup finished in 147ms.
Jan 27 08:28:22 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 27 08:28:22 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Jan 27 08:28:22 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 27 08:28:22 compute-0 sshd-session[75974]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:22 compute-0 sshd-session[75981]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:22 compute-0 sudo[75998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:22 compute-0 sudo[75998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:22 compute-0 sudo[75998]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:22 compute-0 sudo[76023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:28:22 compute-0 sudo[76023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:22 compute-0 sudo[76023]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:22 compute-0 sshd-session[76048]: Accepted publickey for ceph-admin from 192.168.122.100 port 37412 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:28:22 compute-0 systemd-logind[799]: New session 24 of user ceph-admin.
Jan 27 08:28:22 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 27 08:28:22 compute-0 sshd-session[76048]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:22 compute-0 sudo[76052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:22 compute-0 sudo[76052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:22 compute-0 sudo[76052]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:22 compute-0 sudo[76077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Jan 27 08:28:22 compute-0 sudo[76077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:22 compute-0 sudo[76077]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:22 compute-0 sshd-session[76102]: Accepted publickey for ceph-admin from 192.168.122.100 port 37428 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:28:22 compute-0 systemd-logind[799]: New session 25 of user ceph-admin.
Jan 27 08:28:22 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 27 08:28:22 compute-0 ceph-mgr[74650]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:28:22 compute-0 sshd-session[76102]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:23 compute-0 sudo[76106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:23 compute-0 sudo[76106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:23 compute-0 sudo[76106]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:23 compute-0 sudo[76131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 27 08:28:23 compute-0 sudo[76131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:23 compute-0 sudo[76131]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:23 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 27 08:28:23 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 27 08:28:23 compute-0 sshd-session[76156]: Accepted publickey for ceph-admin from 192.168.122.100 port 37442 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:28:23 compute-0 systemd-logind[799]: New session 26 of user ceph-admin.
Jan 27 08:28:23 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 27 08:28:23 compute-0 sshd-session[76156]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:23 compute-0 sudo[76160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:23 compute-0 sudo[76160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:23 compute-0 sudo[76160]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:23 compute-0 sudo[76185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:28:23 compute-0 sudo[76185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:23 compute-0 sudo[76185]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:23 compute-0 sshd-session[76210]: Accepted publickey for ceph-admin from 192.168.122.100 port 37454 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:28:23 compute-0 systemd-logind[799]: New session 27 of user ceph-admin.
Jan 27 08:28:23 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 27 08:28:23 compute-0 sshd-session[76210]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:23 compute-0 sudo[76214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:23 compute-0 sudo[76214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:23 compute-0 sudo[76214]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:23 compute-0 ceph-mon[74357]: Deploying cephadm binary to compute-0
Jan 27 08:28:23 compute-0 sudo[76239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:28:23 compute-0 sudo[76239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:23 compute-0 sudo[76239]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:24 compute-0 sshd-session[76264]: Accepted publickey for ceph-admin from 192.168.122.100 port 37458 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:28:24 compute-0 systemd-logind[799]: New session 28 of user ceph-admin.
Jan 27 08:28:24 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 27 08:28:24 compute-0 sshd-session[76264]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:24 compute-0 sudo[76268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:24 compute-0 sudo[76268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:24 compute-0 sudo[76268]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:24 compute-0 sudo[76293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 27 08:28:24 compute-0 sudo[76293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:24 compute-0 sudo[76293]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:24 compute-0 sshd-session[76318]: Accepted publickey for ceph-admin from 192.168.122.100 port 37472 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:28:24 compute-0 systemd-logind[799]: New session 29 of user ceph-admin.
Jan 27 08:28:24 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 27 08:28:24 compute-0 sshd-session[76318]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:24 compute-0 sudo[76322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:24 compute-0 sudo[76322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:24 compute-0 sudo[76322]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:24 compute-0 sudo[76347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:28:24 compute-0 sudo[76347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:24 compute-0 sudo[76347]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:24 compute-0 sshd-session[76372]: Accepted publickey for ceph-admin from 192.168.122.100 port 37476 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:28:24 compute-0 systemd-logind[799]: New session 30 of user ceph-admin.
Jan 27 08:28:24 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 27 08:28:24 compute-0 sshd-session[76372]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:24 compute-0 ceph-mgr[74650]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:28:24 compute-0 sudo[76376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:24 compute-0 sudo[76376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:24 compute-0 sudo[76376]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:25 compute-0 sudo[76401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 27 08:28:25 compute-0 sudo[76401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:25 compute-0 sudo[76401]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:25 compute-0 sshd-session[76426]: Accepted publickey for ceph-admin from 192.168.122.100 port 37480 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:28:25 compute-0 systemd-logind[799]: New session 31 of user ceph-admin.
Jan 27 08:28:25 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 27 08:28:25 compute-0 sshd-session[76426]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:25 compute-0 sshd-session[76453]: Accepted publickey for ceph-admin from 192.168.122.100 port 37496 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:28:25 compute-0 systemd-logind[799]: New session 32 of user ceph-admin.
Jan 27 08:28:25 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 27 08:28:25 compute-0 sshd-session[76453]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:25 compute-0 sudo[76457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:25 compute-0 sudo[76457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:25 compute-0 sudo[76457]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:26 compute-0 sudo[76482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 27 08:28:26 compute-0 sudo[76482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:26 compute-0 sudo[76482]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:26 compute-0 sshd-session[76507]: Accepted publickey for ceph-admin from 192.168.122.100 port 37508 ssh2: RSA SHA256:dBEqqZNObdFPmdYQ/qZHFwe5QOlH2kWKbrkEMIivtcY
Jan 27 08:28:26 compute-0 systemd-logind[799]: New session 33 of user ceph-admin.
Jan 27 08:28:26 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Jan 27 08:28:26 compute-0 sshd-session[76507]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 27 08:28:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:26 compute-0 sudo[76511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:26 compute-0 sudo[76511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:26 compute-0 sudo[76511]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:26 compute-0 sudo[76536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Jan 27 08:28:26 compute-0 sudo[76536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:26 compute-0 sudo[76536]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 27 08:28:26 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:26 compute-0 ceph-mgr[74650]: [cephadm INFO root] Added host compute-0
Jan 27 08:28:26 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 27 08:28:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 27 08:28:26 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 27 08:28:26 compute-0 cranky_bouman[75948]: Added host 'compute-0' with addr '192.168.122.100'
Jan 27 08:28:26 compute-0 systemd[1]: libpod-ec66ecb1bce2d223490e9f802acb08190eb4a9b7a12fc3b1f4ed0e8859d6698a.scope: Deactivated successfully.
Jan 27 08:28:26 compute-0 sudo[76581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:26 compute-0 sudo[76581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:26 compute-0 sudo[76581]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:26 compute-0 podman[76595]: 2026-01-27 08:28:26.774130544 +0000 UTC m=+0.036991603 container died ec66ecb1bce2d223490e9f802acb08190eb4a9b7a12fc3b1f4ed0e8859d6698a (image=quay.io/ceph/ceph:v18, name=cranky_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 27 08:28:26 compute-0 sudo[76618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:28:26 compute-0 sudo[76618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:26 compute-0 sudo[76618]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fb755d7639a0465fa6bb4ff1d032fd4a409753b5e46a6c54bbd4a7ec46e859f-merged.mount: Deactivated successfully.
Jan 27 08:28:26 compute-0 sudo[76647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:26 compute-0 sudo[76647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:26 compute-0 sudo[76647]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:26 compute-0 podman[76595]: 2026-01-27 08:28:26.891601531 +0000 UTC m=+0.154462580 container remove ec66ecb1bce2d223490e9f802acb08190eb4a9b7a12fc3b1f4ed0e8859d6698a (image=quay.io/ceph/ceph:v18, name=cranky_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 08:28:26 compute-0 systemd[1]: libpod-conmon-ec66ecb1bce2d223490e9f802acb08190eb4a9b7a12fc3b1f4ed0e8859d6698a.scope: Deactivated successfully.
Jan 27 08:28:26 compute-0 ceph-mgr[74650]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:28:26 compute-0 sudo[76672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Jan 27 08:28:26 compute-0 sudo[76672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:26 compute-0 podman[76676]: 2026-01-27 08:28:26.964786682 +0000 UTC m=+0.044691860 container create 1417fb6f18400f0b2a0d36d7823c59e3e140b2649ff5f009f00fdf461856eb60 (image=quay.io/ceph/ceph:v18, name=happy_bell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 27 08:28:27 compute-0 systemd[1]: Started libpod-conmon-1417fb6f18400f0b2a0d36d7823c59e3e140b2649ff5f009f00fdf461856eb60.scope.
Jan 27 08:28:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1187d386b8255106732014028d3744d710881db237dfbca073afb04173ee07/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1187d386b8255106732014028d3744d710881db237dfbca073afb04173ee07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1187d386b8255106732014028d3744d710881db237dfbca073afb04173ee07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:27 compute-0 podman[76676]: 2026-01-27 08:28:26.946516687 +0000 UTC m=+0.026421885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:27 compute-0 podman[76676]: 2026-01-27 08:28:27.051743025 +0000 UTC m=+0.131648233 container init 1417fb6f18400f0b2a0d36d7823c59e3e140b2649ff5f009f00fdf461856eb60 (image=quay.io/ceph/ceph:v18, name=happy_bell, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:27 compute-0 podman[76676]: 2026-01-27 08:28:27.061145749 +0000 UTC m=+0.141050927 container start 1417fb6f18400f0b2a0d36d7823c59e3e140b2649ff5f009f00fdf461856eb60 (image=quay.io/ceph/ceph:v18, name=happy_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:27 compute-0 podman[76676]: 2026-01-27 08:28:27.064790487 +0000 UTC m=+0.144695695 container attach 1417fb6f18400f0b2a0d36d7823c59e3e140b2649ff5f009f00fdf461856eb60 (image=quay.io/ceph/ceph:v18, name=happy_bell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:28:27 compute-0 podman[76745]: 2026-01-27 08:28:27.215124085 +0000 UTC m=+0.049965043 container create a78e04c058eada7c7255a7eb510c191787eec54d969bd5892b7b0b3c1655413a (image=quay.io/ceph/ceph:v18, name=keen_lichterman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 27 08:28:27 compute-0 systemd[1]: Started libpod-conmon-a78e04c058eada7c7255a7eb510c191787eec54d969bd5892b7b0b3c1655413a.scope.
Jan 27 08:28:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:27 compute-0 podman[76745]: 2026-01-27 08:28:27.286743263 +0000 UTC m=+0.121584251 container init a78e04c058eada7c7255a7eb510c191787eec54d969bd5892b7b0b3c1655413a (image=quay.io/ceph/ceph:v18, name=keen_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:28:27 compute-0 podman[76745]: 2026-01-27 08:28:27.192463342 +0000 UTC m=+0.027304330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:27 compute-0 podman[76745]: 2026-01-27 08:28:27.293382162 +0000 UTC m=+0.128223120 container start a78e04c058eada7c7255a7eb510c191787eec54d969bd5892b7b0b3c1655413a (image=quay.io/ceph/ceph:v18, name=keen_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 27 08:28:27 compute-0 podman[76745]: 2026-01-27 08:28:27.302151419 +0000 UTC m=+0.136992397 container attach a78e04c058eada7c7255a7eb510c191787eec54d969bd5892b7b0b3c1655413a (image=quay.io/ceph/ceph:v18, name=keen_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:28:27 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:27 compute-0 ceph-mgr[74650]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 27 08:28:27 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 27 08:28:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 27 08:28:27 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:27 compute-0 keen_lichterman[76762]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 27 08:28:27 compute-0 happy_bell[76714]: Scheduled mon update...
Jan 27 08:28:27 compute-0 systemd[1]: libpod-a78e04c058eada7c7255a7eb510c191787eec54d969bd5892b7b0b3c1655413a.scope: Deactivated successfully.
Jan 27 08:28:27 compute-0 conmon[76762]: conmon a78e04c058eada7c7255 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a78e04c058eada7c7255a7eb510c191787eec54d969bd5892b7b0b3c1655413a.scope/container/memory.events
Jan 27 08:28:27 compute-0 podman[76745]: 2026-01-27 08:28:27.617696307 +0000 UTC m=+0.452537275 container died a78e04c058eada7c7255a7eb510c191787eec54d969bd5892b7b0b3c1655413a (image=quay.io/ceph/ceph:v18, name=keen_lichterman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:28:27 compute-0 systemd[1]: libpod-1417fb6f18400f0b2a0d36d7823c59e3e140b2649ff5f009f00fdf461856eb60.scope: Deactivated successfully.
Jan 27 08:28:27 compute-0 podman[76676]: 2026-01-27 08:28:27.62707662 +0000 UTC m=+0.706981798 container died 1417fb6f18400f0b2a0d36d7823c59e3e140b2649ff5f009f00fdf461856eb60 (image=quay.io/ceph/ceph:v18, name=happy_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8fef95ae386bf19af92638dff0ef5c0f4b1fb5a576294fb2259a82714085309-merged.mount: Deactivated successfully.
Jan 27 08:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f1187d386b8255106732014028d3744d710881db237dfbca073afb04173ee07-merged.mount: Deactivated successfully.
Jan 27 08:28:27 compute-0 podman[76676]: 2026-01-27 08:28:27.692141781 +0000 UTC m=+0.772046969 container remove 1417fb6f18400f0b2a0d36d7823c59e3e140b2649ff5f009f00fdf461856eb60 (image=quay.io/ceph/ceph:v18, name=happy_bell, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:27 compute-0 ceph-mon[74357]: Added host compute-0
Jan 27 08:28:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 27 08:28:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:27 compute-0 podman[76745]: 2026-01-27 08:28:27.706080938 +0000 UTC m=+0.540921896 container remove a78e04c058eada7c7255a7eb510c191787eec54d969bd5892b7b0b3c1655413a (image=quay.io/ceph/ceph:v18, name=keen_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:28:27 compute-0 systemd[1]: libpod-conmon-a78e04c058eada7c7255a7eb510c191787eec54d969bd5892b7b0b3c1655413a.scope: Deactivated successfully.
Jan 27 08:28:27 compute-0 systemd[1]: libpod-conmon-1417fb6f18400f0b2a0d36d7823c59e3e140b2649ff5f009f00fdf461856eb60.scope: Deactivated successfully.
Jan 27 08:28:27 compute-0 sudo[76672]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Jan 27 08:28:27 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:27 compute-0 podman[76811]: 2026-01-27 08:28:27.752964446 +0000 UTC m=+0.043744264 container create 985ab58ca2155ef8b75ecde3939475afe177ea3ff96cc78e61ce7fd2cb84dc01 (image=quay.io/ceph/ceph:v18, name=beautiful_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 27 08:28:27 compute-0 systemd[1]: Started libpod-conmon-985ab58ca2155ef8b75ecde3939475afe177ea3ff96cc78e61ce7fd2cb84dc01.scope.
Jan 27 08:28:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:27 compute-0 sudo[76826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:27 compute-0 sudo[76826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91415b9a15b33d6dc4cbf54cef0fbc03f382f736f23c13eeda90445e5141646a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91415b9a15b33d6dc4cbf54cef0fbc03f382f736f23c13eeda90445e5141646a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91415b9a15b33d6dc4cbf54cef0fbc03f382f736f23c13eeda90445e5141646a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:27 compute-0 sudo[76826]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:27 compute-0 podman[76811]: 2026-01-27 08:28:27.828341126 +0000 UTC m=+0.119120964 container init 985ab58ca2155ef8b75ecde3939475afe177ea3ff96cc78e61ce7fd2cb84dc01 (image=quay.io/ceph/ceph:v18, name=beautiful_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:28:27 compute-0 podman[76811]: 2026-01-27 08:28:27.73349847 +0000 UTC m=+0.024278338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:27 compute-0 podman[76811]: 2026-01-27 08:28:27.834442201 +0000 UTC m=+0.125222009 container start 985ab58ca2155ef8b75ecde3939475afe177ea3ff96cc78e61ce7fd2cb84dc01 (image=quay.io/ceph/ceph:v18, name=beautiful_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:28:27 compute-0 podman[76811]: 2026-01-27 08:28:27.840242258 +0000 UTC m=+0.131022076 container attach 985ab58ca2155ef8b75ecde3939475afe177ea3ff96cc78e61ce7fd2cb84dc01 (image=quay.io/ceph/ceph:v18, name=beautiful_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:27 compute-0 sudo[76856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:28:27 compute-0 sudo[76856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:27 compute-0 sudo[76856]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:27 compute-0 sudo[76883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:27 compute-0 sudo[76883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:27 compute-0 sudo[76883]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:27 compute-0 sudo[76908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 27 08:28:27 compute-0 sudo[76908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:28 compute-0 sudo[76908]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:28:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:28 compute-0 sudo[76973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:28 compute-0 sudo[76973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:28 compute-0 sudo[76973]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:28 compute-0 sudo[76998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:28:28 compute-0 sudo[76998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:28 compute-0 sudo[76998]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:28 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:28 compute-0 ceph-mgr[74650]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 27 08:28:28 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 27 08:28:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 27 08:28:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:28 compute-0 beautiful_mclaren[76848]: Scheduled mgr update...
Jan 27 08:28:28 compute-0 systemd[1]: libpod-985ab58ca2155ef8b75ecde3939475afe177ea3ff96cc78e61ce7fd2cb84dc01.scope: Deactivated successfully.
Jan 27 08:28:28 compute-0 podman[76811]: 2026-01-27 08:28:28.395529981 +0000 UTC m=+0.686309799 container died 985ab58ca2155ef8b75ecde3939475afe177ea3ff96cc78e61ce7fd2cb84dc01 (image=quay.io/ceph/ceph:v18, name=beautiful_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Jan 27 08:28:28 compute-0 sudo[77023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:28 compute-0 sudo[77023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:28 compute-0 sudo[77023]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-91415b9a15b33d6dc4cbf54cef0fbc03f382f736f23c13eeda90445e5141646a-merged.mount: Deactivated successfully.
Jan 27 08:28:28 compute-0 podman[76811]: 2026-01-27 08:28:28.434254449 +0000 UTC m=+0.725034267 container remove 985ab58ca2155ef8b75ecde3939475afe177ea3ff96cc78e61ce7fd2cb84dc01 (image=quay.io/ceph/ceph:v18, name=beautiful_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 27 08:28:28 compute-0 systemd[1]: libpod-conmon-985ab58ca2155ef8b75ecde3939475afe177ea3ff96cc78e61ce7fd2cb84dc01.scope: Deactivated successfully.
Jan 27 08:28:28 compute-0 sudo[77057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 27 08:28:28 compute-0 sudo[77057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:28 compute-0 podman[77088]: 2026-01-27 08:28:28.492947907 +0000 UTC m=+0.041395331 container create eaf2637a1898637f1a9fac9038ec3545a0d9d58fd84803ba73a3fe85741a2e4b (image=quay.io/ceph/ceph:v18, name=lucid_benz, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:28:28 compute-0 systemd[1]: Started libpod-conmon-eaf2637a1898637f1a9fac9038ec3545a0d9d58fd84803ba73a3fe85741a2e4b.scope.
Jan 27 08:28:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aad3832149ef820be539ae9ea26d3b1e9440c15b8020a6c8fd38fe22d453348/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aad3832149ef820be539ae9ea26d3b1e9440c15b8020a6c8fd38fe22d453348/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aad3832149ef820be539ae9ea26d3b1e9440c15b8020a6c8fd38fe22d453348/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:28 compute-0 podman[77088]: 2026-01-27 08:28:28.561120871 +0000 UTC m=+0.109568315 container init eaf2637a1898637f1a9fac9038ec3545a0d9d58fd84803ba73a3fe85741a2e4b (image=quay.io/ceph/ceph:v18, name=lucid_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:28:28 compute-0 podman[77088]: 2026-01-27 08:28:28.567101944 +0000 UTC m=+0.115549378 container start eaf2637a1898637f1a9fac9038ec3545a0d9d58fd84803ba73a3fe85741a2e4b (image=quay.io/ceph/ceph:v18, name=lucid_benz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 27 08:28:28 compute-0 podman[77088]: 2026-01-27 08:28:28.472798822 +0000 UTC m=+0.021246286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:28 compute-0 podman[77088]: 2026-01-27 08:28:28.570703291 +0000 UTC m=+0.119150715 container attach eaf2637a1898637f1a9fac9038ec3545a0d9d58fd84803ba73a3fe85741a2e4b (image=quay.io/ceph/ceph:v18, name=lucid_benz, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 27 08:28:28 compute-0 ceph-mon[74357]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:28 compute-0 ceph-mon[74357]: Saving service mon spec with placement count:5
Jan 27 08:28:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:28 compute-0 podman[77183]: 2026-01-27 08:28:28.900425612 +0000 UTC m=+0.054944358 container exec b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:28:28 compute-0 ceph-mgr[74650]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:28:29 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:29 compute-0 ceph-mgr[74650]: [cephadm INFO root] Saving service crash spec with placement *
Jan 27 08:28:29 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 27 08:28:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 27 08:28:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:29 compute-0 lucid_benz[77106]: Scheduled crash update...
Jan 27 08:28:29 compute-0 podman[77183]: 2026-01-27 08:28:29.185806022 +0000 UTC m=+0.340324728 container exec_died b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:28:29 compute-0 systemd[1]: libpod-eaf2637a1898637f1a9fac9038ec3545a0d9d58fd84803ba73a3fe85741a2e4b.scope: Deactivated successfully.
Jan 27 08:28:29 compute-0 podman[77088]: 2026-01-27 08:28:29.195500315 +0000 UTC m=+0.743947759 container died eaf2637a1898637f1a9fac9038ec3545a0d9d58fd84803ba73a3fe85741a2e4b (image=quay.io/ceph/ceph:v18, name=lucid_benz, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2aad3832149ef820be539ae9ea26d3b1e9440c15b8020a6c8fd38fe22d453348-merged.mount: Deactivated successfully.
Jan 27 08:28:29 compute-0 podman[77088]: 2026-01-27 08:28:29.251752877 +0000 UTC m=+0.800200291 container remove eaf2637a1898637f1a9fac9038ec3545a0d9d58fd84803ba73a3fe85741a2e4b (image=quay.io/ceph/ceph:v18, name=lucid_benz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:29 compute-0 systemd[1]: libpod-conmon-eaf2637a1898637f1a9fac9038ec3545a0d9d58fd84803ba73a3fe85741a2e4b.scope: Deactivated successfully.
Jan 27 08:28:29 compute-0 sudo[77057]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:28:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:29 compute-0 podman[77265]: 2026-01-27 08:28:29.313721514 +0000 UTC m=+0.045788920 container create e0c3ddec9b7ed6ac046bb6343f3a36bcfebc2928ba44dc85f06648fa84cc6735 (image=quay.io/ceph/ceph:v18, name=determined_wilbur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:29 compute-0 systemd[1]: Started libpod-conmon-e0c3ddec9b7ed6ac046bb6343f3a36bcfebc2928ba44dc85f06648fa84cc6735.scope.
Jan 27 08:28:29 compute-0 sudo[77281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:29 compute-0 sudo[77281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:29 compute-0 sudo[77281]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23abc01a672fdb2c2d18bd6b23a2e4fc5fae01ee743a0794e1a900a895dd0517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23abc01a672fdb2c2d18bd6b23a2e4fc5fae01ee743a0794e1a900a895dd0517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23abc01a672fdb2c2d18bd6b23a2e4fc5fae01ee743a0794e1a900a895dd0517/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:29 compute-0 podman[77265]: 2026-01-27 08:28:29.290871495 +0000 UTC m=+0.022938921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:29 compute-0 podman[77265]: 2026-01-27 08:28:29.391398875 +0000 UTC m=+0.123466301 container init e0c3ddec9b7ed6ac046bb6343f3a36bcfebc2928ba44dc85f06648fa84cc6735 (image=quay.io/ceph/ceph:v18, name=determined_wilbur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:28:29 compute-0 podman[77265]: 2026-01-27 08:28:29.397553012 +0000 UTC m=+0.129620438 container start e0c3ddec9b7ed6ac046bb6343f3a36bcfebc2928ba44dc85f06648fa84cc6735 (image=quay.io/ceph/ceph:v18, name=determined_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:28:29 compute-0 podman[77265]: 2026-01-27 08:28:29.401085207 +0000 UTC m=+0.133152613 container attach e0c3ddec9b7ed6ac046bb6343f3a36bcfebc2928ba44dc85f06648fa84cc6735 (image=quay.io/ceph/ceph:v18, name=determined_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:28:29 compute-0 sudo[77312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:28:29 compute-0 sudo[77312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:29 compute-0 sudo[77312]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:29 compute-0 sudo[77339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:29 compute-0 sudo[77339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:29 compute-0 sudo[77339]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:29 compute-0 sudo[77364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:28:29 compute-0 sudo[77364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:29 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77401 (sysctl)
Jan 27 08:28:29 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 27 08:28:29 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 27 08:28:29 compute-0 ceph-mon[74357]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:29 compute-0 ceph-mon[74357]: Saving service mgr spec with placement count:2
Jan 27 08:28:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Jan 27 08:28:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2767736552' entity='client.admin' 
Jan 27 08:28:29 compute-0 systemd[1]: libpod-e0c3ddec9b7ed6ac046bb6343f3a36bcfebc2928ba44dc85f06648fa84cc6735.scope: Deactivated successfully.
Jan 27 08:28:29 compute-0 podman[77265]: 2026-01-27 08:28:29.958418926 +0000 UTC m=+0.690486332 container died e0c3ddec9b7ed6ac046bb6343f3a36bcfebc2928ba44dc85f06648fa84cc6735 (image=quay.io/ceph/ceph:v18, name=determined_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-23abc01a672fdb2c2d18bd6b23a2e4fc5fae01ee743a0794e1a900a895dd0517-merged.mount: Deactivated successfully.
Jan 27 08:28:30 compute-0 sudo[77364]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:30 compute-0 podman[77265]: 2026-01-27 08:28:30.014349489 +0000 UTC m=+0.746416915 container remove e0c3ddec9b7ed6ac046bb6343f3a36bcfebc2928ba44dc85f06648fa84cc6735 (image=quay.io/ceph/ceph:v18, name=determined_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:28:30 compute-0 systemd[1]: libpod-conmon-e0c3ddec9b7ed6ac046bb6343f3a36bcfebc2928ba44dc85f06648fa84cc6735.scope: Deactivated successfully.
Jan 27 08:28:30 compute-0 sudo[77459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:30 compute-0 podman[77458]: 2026-01-27 08:28:30.078777512 +0000 UTC m=+0.045203713 container create a830f38dfef332f7c787072f56800d3eac3323db9e7851eed575ee18e9a1e16c (image=quay.io/ceph/ceph:v18, name=practical_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:28:30 compute-0 sudo[77459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:30 compute-0 sudo[77459]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:30 compute-0 systemd[1]: Started libpod-conmon-a830f38dfef332f7c787072f56800d3eac3323db9e7851eed575ee18e9a1e16c.scope.
Jan 27 08:28:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:30 compute-0 sudo[77497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:28:30 compute-0 sudo[77497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55cadf38a92c2b1b930513b6a0c5e954c73ad9528d763b48f169cfd7238c394/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55cadf38a92c2b1b930513b6a0c5e954c73ad9528d763b48f169cfd7238c394/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55cadf38a92c2b1b930513b6a0c5e954c73ad9528d763b48f169cfd7238c394/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:30 compute-0 sudo[77497]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:30 compute-0 podman[77458]: 2026-01-27 08:28:30.056181251 +0000 UTC m=+0.022607462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:30 compute-0 podman[77458]: 2026-01-27 08:28:30.161678976 +0000 UTC m=+0.128105187 container init a830f38dfef332f7c787072f56800d3eac3323db9e7851eed575ee18e9a1e16c (image=quay.io/ceph/ceph:v18, name=practical_tharp, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 08:28:30 compute-0 podman[77458]: 2026-01-27 08:28:30.169943949 +0000 UTC m=+0.136370140 container start a830f38dfef332f7c787072f56800d3eac3323db9e7851eed575ee18e9a1e16c (image=quay.io/ceph/ceph:v18, name=practical_tharp, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 27 08:28:30 compute-0 podman[77458]: 2026-01-27 08:28:30.175825628 +0000 UTC m=+0.142251819 container attach a830f38dfef332f7c787072f56800d3eac3323db9e7851eed575ee18e9a1e16c (image=quay.io/ceph/ceph:v18, name=practical_tharp, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:28:30 compute-0 sudo[77527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:30 compute-0 sudo[77527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:30 compute-0 sudo[77527]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:30 compute-0 sudo[77554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 27 08:28:30 compute-0 sudo[77554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:30 compute-0 sudo[77554]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:28:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:30 compute-0 sudo[77597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:30 compute-0 sudo[77597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:30 compute-0 sudo[77597]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:30 compute-0 sudo[77641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:28:30 compute-0 sudo[77641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:30 compute-0 sudo[77641]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:30 compute-0 sudo[77666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:30 compute-0 sudo[77666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:30 compute-0 sudo[77666]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:30 compute-0 sudo[77691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- inventory --format=json-pretty --filter-for-batch
Jan 27 08:28:30 compute-0 sudo[77691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:30 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Jan 27 08:28:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:30 compute-0 ceph-mon[74357]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:30 compute-0 ceph-mon[74357]: Saving service crash spec with placement *
Jan 27 08:28:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2767736552' entity='client.admin' 
Jan 27 08:28:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:30 compute-0 ceph-mon[74357]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:30 compute-0 systemd[1]: libpod-a830f38dfef332f7c787072f56800d3eac3323db9e7851eed575ee18e9a1e16c.scope: Deactivated successfully.
Jan 27 08:28:30 compute-0 podman[77458]: 2026-01-27 08:28:30.846981616 +0000 UTC m=+0.813407817 container died a830f38dfef332f7c787072f56800d3eac3323db9e7851eed575ee18e9a1e16c (image=quay.io/ceph/ceph:v18, name=practical_tharp, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 27 08:28:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b55cadf38a92c2b1b930513b6a0c5e954c73ad9528d763b48f169cfd7238c394-merged.mount: Deactivated successfully.
Jan 27 08:28:30 compute-0 ceph-mgr[74650]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:28:30 compute-0 podman[77458]: 2026-01-27 08:28:30.960304193 +0000 UTC m=+0.926730384 container remove a830f38dfef332f7c787072f56800d3eac3323db9e7851eed575ee18e9a1e16c (image=quay.io/ceph/ceph:v18, name=practical_tharp, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 27 08:28:30 compute-0 systemd[1]: libpod-conmon-a830f38dfef332f7c787072f56800d3eac3323db9e7851eed575ee18e9a1e16c.scope: Deactivated successfully.
Jan 27 08:28:31 compute-0 podman[77754]: 2026-01-27 08:28:31.024254223 +0000 UTC m=+0.045870222 container create f1504774d53c249b50b44a88e8b338c998d86e2cf362396f8da631de4c81c3ba (image=quay.io/ceph/ceph:v18, name=sharp_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:28:31 compute-0 systemd[1]: Started libpod-conmon-f1504774d53c249b50b44a88e8b338c998d86e2cf362396f8da631de4c81c3ba.scope.
Jan 27 08:28:31 compute-0 podman[77780]: 2026-01-27 08:28:31.072786216 +0000 UTC m=+0.047794304 container create b0ad436576335cac1a8eb08b098c15eb6a6736c3e5b45937ec42ed4a85335c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Jan 27 08:28:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed671457b532e69ea735f5710ab81331a1100793d3fc7962a46c28b840047307/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed671457b532e69ea735f5710ab81331a1100793d3fc7962a46c28b840047307/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed671457b532e69ea735f5710ab81331a1100793d3fc7962a46c28b840047307/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:31 compute-0 podman[77754]: 2026-01-27 08:28:31.002367551 +0000 UTC m=+0.023983580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:31 compute-0 podman[77754]: 2026-01-27 08:28:31.100741662 +0000 UTC m=+0.122357681 container init f1504774d53c249b50b44a88e8b338c998d86e2cf362396f8da631de4c81c3ba (image=quay.io/ceph/ceph:v18, name=sharp_clarke, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:31 compute-0 systemd[1]: Started libpod-conmon-b0ad436576335cac1a8eb08b098c15eb6a6736c3e5b45937ec42ed4a85335c60.scope.
Jan 27 08:28:31 compute-0 podman[77754]: 2026-01-27 08:28:31.10692147 +0000 UTC m=+0.128537469 container start f1504774d53c249b50b44a88e8b338c998d86e2cf362396f8da631de4c81c3ba (image=quay.io/ceph/ceph:v18, name=sharp_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:28:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:31 compute-0 podman[77754]: 2026-01-27 08:28:31.115922533 +0000 UTC m=+0.137538552 container attach f1504774d53c249b50b44a88e8b338c998d86e2cf362396f8da631de4c81c3ba (image=quay.io/ceph/ceph:v18, name=sharp_clarke, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Jan 27 08:28:31 compute-0 podman[77780]: 2026-01-27 08:28:31.130265652 +0000 UTC m=+0.105273740 container init b0ad436576335cac1a8eb08b098c15eb6a6736c3e5b45937ec42ed4a85335c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sutherland, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:31 compute-0 podman[77780]: 2026-01-27 08:28:31.135779851 +0000 UTC m=+0.110787929 container start b0ad436576335cac1a8eb08b098c15eb6a6736c3e5b45937ec42ed4a85335c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 08:28:31 compute-0 silly_sutherland[77803]: 167 167
Jan 27 08:28:31 compute-0 systemd[1]: libpod-b0ad436576335cac1a8eb08b098c15eb6a6736c3e5b45937ec42ed4a85335c60.scope: Deactivated successfully.
Jan 27 08:28:31 compute-0 podman[77780]: 2026-01-27 08:28:31.052651912 +0000 UTC m=+0.027660010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:28:31 compute-0 podman[77780]: 2026-01-27 08:28:31.149542822 +0000 UTC m=+0.124550900 container attach b0ad436576335cac1a8eb08b098c15eb6a6736c3e5b45937ec42ed4a85335c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:28:31 compute-0 podman[77780]: 2026-01-27 08:28:31.150508949 +0000 UTC m=+0.125517047 container died b0ad436576335cac1a8eb08b098c15eb6a6736c3e5b45937ec42ed4a85335c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sutherland, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 08:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-63fbce3d9bfcf15e73602badaba8e06779553b9029f3c54b19400506f26fddd2-merged.mount: Deactivated successfully.
Jan 27 08:28:31 compute-0 podman[77780]: 2026-01-27 08:28:31.217760029 +0000 UTC m=+0.192768107 container remove b0ad436576335cac1a8eb08b098c15eb6a6736c3e5b45937ec42ed4a85335c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:28:31 compute-0 systemd[1]: libpod-conmon-b0ad436576335cac1a8eb08b098c15eb6a6736c3e5b45937ec42ed4a85335c60.scope: Deactivated successfully.
Jan 27 08:28:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:31 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 27 08:28:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:31 compute-0 ceph-mgr[74650]: [cephadm INFO root] Added label _admin to host compute-0
Jan 27 08:28:31 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 27 08:28:31 compute-0 sharp_clarke[77797]: Added label _admin to host compute-0
Jan 27 08:28:31 compute-0 systemd[1]: libpod-f1504774d53c249b50b44a88e8b338c998d86e2cf362396f8da631de4c81c3ba.scope: Deactivated successfully.
Jan 27 08:28:31 compute-0 podman[77843]: 2026-01-27 08:28:31.714651992 +0000 UTC m=+0.022872470 container died f1504774d53c249b50b44a88e8b338c998d86e2cf362396f8da631de4c81c3ba (image=quay.io/ceph/ceph:v18, name=sharp_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Jan 27 08:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed671457b532e69ea735f5710ab81331a1100793d3fc7962a46c28b840047307-merged.mount: Deactivated successfully.
Jan 27 08:28:31 compute-0 podman[77843]: 2026-01-27 08:28:31.767544843 +0000 UTC m=+0.075765311 container remove f1504774d53c249b50b44a88e8b338c998d86e2cf362396f8da631de4c81c3ba (image=quay.io/ceph/ceph:v18, name=sharp_clarke, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:28:31 compute-0 systemd[1]: libpod-conmon-f1504774d53c249b50b44a88e8b338c998d86e2cf362396f8da631de4c81c3ba.scope: Deactivated successfully.
Jan 27 08:28:31 compute-0 podman[77858]: 2026-01-27 08:28:31.830362402 +0000 UTC m=+0.040638700 container create 05b4e571e2d375fb6754775ef773fb27c6b12a152ebf740ff3356f61d084a714 (image=quay.io/ceph/ceph:v18, name=heuristic_bartik, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:28:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:31 compute-0 ceph-mon[74357]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:31 compute-0 ceph-mon[74357]: Added label _admin to host compute-0
Jan 27 08:28:31 compute-0 systemd[1]: Started libpod-conmon-05b4e571e2d375fb6754775ef773fb27c6b12a152ebf740ff3356f61d084a714.scope.
Jan 27 08:28:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a382f4650d3b787346464757ea644cf98438a849e21b37bcff997376b014522/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a382f4650d3b787346464757ea644cf98438a849e21b37bcff997376b014522/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a382f4650d3b787346464757ea644cf98438a849e21b37bcff997376b014522/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:31 compute-0 podman[77858]: 2026-01-27 08:28:31.897798748 +0000 UTC m=+0.108075136 container init 05b4e571e2d375fb6754775ef773fb27c6b12a152ebf740ff3356f61d084a714 (image=quay.io/ceph/ceph:v18, name=heuristic_bartik, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:28:31 compute-0 podman[77858]: 2026-01-27 08:28:31.903971555 +0000 UTC m=+0.114247853 container start 05b4e571e2d375fb6754775ef773fb27c6b12a152ebf740ff3356f61d084a714 (image=quay.io/ceph/ceph:v18, name=heuristic_bartik, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:28:31 compute-0 podman[77858]: 2026-01-27 08:28:31.813403004 +0000 UTC m=+0.023679332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:31 compute-0 podman[77858]: 2026-01-27 08:28:31.908522567 +0000 UTC m=+0.118798885 container attach 05b4e571e2d375fb6754775ef773fb27c6b12a152ebf740ff3356f61d084a714 (image=quay.io/ceph/ceph:v18, name=heuristic_bartik, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Jan 27 08:28:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Jan 27 08:28:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4282422156' entity='client.admin' 
Jan 27 08:28:32 compute-0 systemd[1]: libpod-05b4e571e2d375fb6754775ef773fb27c6b12a152ebf740ff3356f61d084a714.scope: Deactivated successfully.
Jan 27 08:28:32 compute-0 podman[77858]: 2026-01-27 08:28:32.474462219 +0000 UTC m=+0.684738527 container died 05b4e571e2d375fb6754775ef773fb27c6b12a152ebf740ff3356f61d084a714 (image=quay.io/ceph/ceph:v18, name=heuristic_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:28:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a382f4650d3b787346464757ea644cf98438a849e21b37bcff997376b014522-merged.mount: Deactivated successfully.
Jan 27 08:28:32 compute-0 podman[77858]: 2026-01-27 08:28:32.532676894 +0000 UTC m=+0.742953232 container remove 05b4e571e2d375fb6754775ef773fb27c6b12a152ebf740ff3356f61d084a714 (image=quay.io/ceph/ceph:v18, name=heuristic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 08:28:32 compute-0 systemd[1]: libpod-conmon-05b4e571e2d375fb6754775ef773fb27c6b12a152ebf740ff3356f61d084a714.scope: Deactivated successfully.
Jan 27 08:28:32 compute-0 podman[77915]: 2026-01-27 08:28:32.599338378 +0000 UTC m=+0.042750577 container create ea7949cd9143b97afe146d86ef7115ac6fe75feebaad7cd83634e21c68cfb715 (image=quay.io/ceph/ceph:v18, name=vigorous_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:28:32 compute-0 systemd[1]: Started libpod-conmon-ea7949cd9143b97afe146d86ef7115ac6fe75feebaad7cd83634e21c68cfb715.scope.
Jan 27 08:28:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e607a6e231352fe4aaddd997e36ce59ab0a49209504ddc17296fd2aa10fb0c38/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e607a6e231352fe4aaddd997e36ce59ab0a49209504ddc17296fd2aa10fb0c38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e607a6e231352fe4aaddd997e36ce59ab0a49209504ddc17296fd2aa10fb0c38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:32 compute-0 podman[77915]: 2026-01-27 08:28:32.674192043 +0000 UTC m=+0.117604232 container init ea7949cd9143b97afe146d86ef7115ac6fe75feebaad7cd83634e21c68cfb715 (image=quay.io/ceph/ceph:v18, name=vigorous_bardeen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:28:32 compute-0 podman[77915]: 2026-01-27 08:28:32.579558093 +0000 UTC m=+0.022970312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:32 compute-0 podman[77915]: 2026-01-27 08:28:32.679057334 +0000 UTC m=+0.122469533 container start ea7949cd9143b97afe146d86ef7115ac6fe75feebaad7cd83634e21c68cfb715 (image=quay.io/ceph/ceph:v18, name=vigorous_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 27 08:28:32 compute-0 podman[77915]: 2026-01-27 08:28:32.682665483 +0000 UTC m=+0.126077712 container attach ea7949cd9143b97afe146d86ef7115ac6fe75feebaad7cd83634e21c68cfb715 (image=quay.io/ceph/ceph:v18, name=vigorous_bardeen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 08:28:32 compute-0 ceph-mgr[74650]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:28:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Jan 27 08:28:33 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2348208179' entity='client.admin' 
Jan 27 08:28:33 compute-0 vigorous_bardeen[77931]: set mgr/dashboard/cluster/status
Jan 27 08:28:33 compute-0 systemd[1]: libpod-ea7949cd9143b97afe146d86ef7115ac6fe75feebaad7cd83634e21c68cfb715.scope: Deactivated successfully.
Jan 27 08:28:33 compute-0 podman[77915]: 2026-01-27 08:28:33.303472518 +0000 UTC m=+0.746884747 container died ea7949cd9143b97afe146d86ef7115ac6fe75feebaad7cd83634e21c68cfb715 (image=quay.io/ceph/ceph:v18, name=vigorous_bardeen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e607a6e231352fe4aaddd997e36ce59ab0a49209504ddc17296fd2aa10fb0c38-merged.mount: Deactivated successfully.
Jan 27 08:28:33 compute-0 podman[77915]: 2026-01-27 08:28:33.378450827 +0000 UTC m=+0.821863006 container remove ea7949cd9143b97afe146d86ef7115ac6fe75feebaad7cd83634e21c68cfb715 (image=quay.io/ceph/ceph:v18, name=vigorous_bardeen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:28:33 compute-0 systemd[1]: libpod-conmon-ea7949cd9143b97afe146d86ef7115ac6fe75feebaad7cd83634e21c68cfb715.scope: Deactivated successfully.
Jan 27 08:28:33 compute-0 sudo[73347]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:33 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4282422156' entity='client.admin' 
Jan 27 08:28:33 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2348208179' entity='client.admin' 
Jan 27 08:28:33 compute-0 podman[77976]: 2026-01-27 08:28:33.583104334 +0000 UTC m=+0.052001527 container create 62656f6aa39daec013f1e5502842995b18c6b92b025c92f88ab502c42bb11940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 08:28:33 compute-0 systemd[1]: Started libpod-conmon-62656f6aa39daec013f1e5502842995b18c6b92b025c92f88ab502c42bb11940.scope.
Jan 27 08:28:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865394710f332d10e1bb45f43198df676bf81c0731c56bbe00edc06f82b16750/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865394710f332d10e1bb45f43198df676bf81c0731c56bbe00edc06f82b16750/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865394710f332d10e1bb45f43198df676bf81c0731c56bbe00edc06f82b16750/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865394710f332d10e1bb45f43198df676bf81c0731c56bbe00edc06f82b16750/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:33 compute-0 podman[77976]: 2026-01-27 08:28:33.554166622 +0000 UTC m=+0.023063835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:28:33 compute-0 podman[77976]: 2026-01-27 08:28:33.665622337 +0000 UTC m=+0.134519530 container init 62656f6aa39daec013f1e5502842995b18c6b92b025c92f88ab502c42bb11940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:28:33 compute-0 podman[77976]: 2026-01-27 08:28:33.673496759 +0000 UTC m=+0.142393952 container start 62656f6aa39daec013f1e5502842995b18c6b92b025c92f88ab502c42bb11940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:33 compute-0 podman[77976]: 2026-01-27 08:28:33.685422733 +0000 UTC m=+0.154319926 container attach 62656f6aa39daec013f1e5502842995b18c6b92b025c92f88ab502c42bb11940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:33 compute-0 sudo[78021]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rntcpztxpzsmboafvrwsnpgxbbrvapsd ; /usr/bin/python3'
Jan 27 08:28:33 compute-0 sudo[78021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:28:33 compute-0 python3[78023]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:28:34 compute-0 podman[78024]: 2026-01-27 08:28:34.01724415 +0000 UTC m=+0.050569759 container create f8e2f624f904242133e959db27a15ea379f5a3005cd687d30f70846a4bb9bc8a (image=quay.io/ceph/ceph:v18, name=keen_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:28:34 compute-0 systemd[1]: Started libpod-conmon-f8e2f624f904242133e959db27a15ea379f5a3005cd687d30f70846a4bb9bc8a.scope.
Jan 27 08:28:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ef97a9f9f0dcd90aa04b71fc31858695a75919d5fb545a8d9562b535e0ae77b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ef97a9f9f0dcd90aa04b71fc31858695a75919d5fb545a8d9562b535e0ae77b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:34 compute-0 podman[78024]: 2026-01-27 08:28:33.9891556 +0000 UTC m=+0.022481239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:34 compute-0 podman[78024]: 2026-01-27 08:28:34.118566201 +0000 UTC m=+0.151891810 container init f8e2f624f904242133e959db27a15ea379f5a3005cd687d30f70846a4bb9bc8a (image=quay.io/ceph/ceph:v18, name=keen_mahavira, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:28:34 compute-0 podman[78024]: 2026-01-27 08:28:34.124102941 +0000 UTC m=+0.157428580 container start f8e2f624f904242133e959db27a15ea379f5a3005cd687d30f70846a4bb9bc8a (image=quay.io/ceph/ceph:v18, name=keen_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 27 08:28:34 compute-0 podman[78024]: 2026-01-27 08:28:34.151958655 +0000 UTC m=+0.185284314 container attach f8e2f624f904242133e959db27a15ea379f5a3005cd687d30f70846a4bb9bc8a (image=quay.io/ceph/ceph:v18, name=keen_mahavira, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:28:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Jan 27 08:28:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1284038165' entity='client.admin' 
Jan 27 08:28:34 compute-0 systemd[1]: libpod-f8e2f624f904242133e959db27a15ea379f5a3005cd687d30f70846a4bb9bc8a.scope: Deactivated successfully.
Jan 27 08:28:34 compute-0 podman[78024]: 2026-01-27 08:28:34.696615891 +0000 UTC m=+0.729941500 container died f8e2f624f904242133e959db27a15ea379f5a3005cd687d30f70846a4bb9bc8a (image=quay.io/ceph/ceph:v18, name=keen_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 08:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ef97a9f9f0dcd90aa04b71fc31858695a75919d5fb545a8d9562b535e0ae77b-merged.mount: Deactivated successfully.
Jan 27 08:28:34 compute-0 podman[78024]: 2026-01-27 08:28:34.736548362 +0000 UTC m=+0.769873971 container remove f8e2f624f904242133e959db27a15ea379f5a3005cd687d30f70846a4bb9bc8a (image=quay.io/ceph/ceph:v18, name=keen_mahavira, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:28:34 compute-0 systemd[1]: libpod-conmon-f8e2f624f904242133e959db27a15ea379f5a3005cd687d30f70846a4bb9bc8a.scope: Deactivated successfully.
Jan 27 08:28:34 compute-0 sudo[78021]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:34 compute-0 clever_beaver[77993]: [
Jan 27 08:28:34 compute-0 clever_beaver[77993]:     {
Jan 27 08:28:34 compute-0 clever_beaver[77993]:         "available": false,
Jan 27 08:28:34 compute-0 clever_beaver[77993]:         "ceph_device": false,
Jan 27 08:28:34 compute-0 clever_beaver[77993]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:         "lsm_data": {},
Jan 27 08:28:34 compute-0 clever_beaver[77993]:         "lvs": [],
Jan 27 08:28:34 compute-0 clever_beaver[77993]:         "path": "/dev/sr0",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:         "rejected_reasons": [
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "Insufficient space (<5GB)",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "Has a FileSystem"
Jan 27 08:28:34 compute-0 clever_beaver[77993]:         ],
Jan 27 08:28:34 compute-0 clever_beaver[77993]:         "sys_api": {
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "actuators": null,
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "device_nodes": "sr0",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "devname": "sr0",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "human_readable_size": "482.00 KB",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "id_bus": "ata",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "model": "QEMU DVD-ROM",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "nr_requests": "2",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "parent": "/dev/sr0",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "partitions": {},
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "path": "/dev/sr0",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "removable": "1",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "rev": "2.5+",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "ro": "0",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "rotational": "1",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "sas_address": "",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "sas_device_handle": "",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "scheduler_mode": "mq-deadline",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "sectors": 0,
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "sectorsize": "2048",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "size": 493568.0,
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "support_discard": "2048",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "type": "disk",
Jan 27 08:28:34 compute-0 clever_beaver[77993]:             "vendor": "QEMU"
Jan 27 08:28:34 compute-0 clever_beaver[77993]:         }
Jan 27 08:28:34 compute-0 clever_beaver[77993]:     }
Jan 27 08:28:34 compute-0 clever_beaver[77993]: ]
Jan 27 08:28:34 compute-0 systemd[1]: libpod-62656f6aa39daec013f1e5502842995b18c6b92b025c92f88ab502c42bb11940.scope: Deactivated successfully.
Jan 27 08:28:34 compute-0 systemd[1]: libpod-62656f6aa39daec013f1e5502842995b18c6b92b025c92f88ab502c42bb11940.scope: Consumed 1.128s CPU time.
Jan 27 08:28:34 compute-0 conmon[77993]: conmon 62656f6aa39daec013f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62656f6aa39daec013f1e5502842995b18c6b92b025c92f88ab502c42bb11940.scope/container/memory.events
Jan 27 08:28:34 compute-0 podman[77976]: 2026-01-27 08:28:34.823826533 +0000 UTC m=+1.292723726 container died 62656f6aa39daec013f1e5502842995b18c6b92b025c92f88ab502c42bb11940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-865394710f332d10e1bb45f43198df676bf81c0731c56bbe00edc06f82b16750-merged.mount: Deactivated successfully.
Jan 27 08:28:34 compute-0 podman[77976]: 2026-01-27 08:28:34.88175831 +0000 UTC m=+1.350655503 container remove 62656f6aa39daec013f1e5502842995b18c6b92b025c92f88ab502c42bb11940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:28:34 compute-0 systemd[1]: libpod-conmon-62656f6aa39daec013f1e5502842995b18c6b92b025c92f88ab502c42bb11940.scope: Deactivated successfully.
Jan 27 08:28:34 compute-0 sudo[77691]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:28:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:28:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:28:34 compute-0 ceph-mgr[74650]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 27 08:28:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:34 compute-0 ceph-mon[74357]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 27 08:28:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:28:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 27 08:28:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 27 08:28:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:28:34 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:28:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:28:34 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 27 08:28:34 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 27 08:28:35 compute-0 sudo[79218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:35 compute-0 sudo[79218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79218]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 27 08:28:35 compute-0 sudo[79243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79243]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:35 compute-0 sudo[79280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79280]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph
Jan 27 08:28:35 compute-0 sudo[79323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79323]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:35 compute-0 sudo[79371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79371]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.conf.new
Jan 27 08:28:35 compute-0 sudo[79418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79418]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:35 compute-0 sudo[79443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79443]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:28:35 compute-0 sudo[79468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79468]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:35 compute-0 sudo[79507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79507]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.conf.new
Jan 27 08:28:35 compute-0 sudo[79561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79561]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrvjjmepavmdrhofgfseqebhjmzekqbk ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769502515.1020823-37172-221729251322905/async_wrapper.py j891207263561 30 /home/zuul/.ansible/tmp/ansible-tmp-1769502515.1020823-37172-221729251322905/AnsiballZ_command.py _'
Jan 27 08:28:35 compute-0 sudo[79616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:28:35 compute-0 sudo[79641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:35 compute-0 sudo[79641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79641]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.conf.new
Jan 27 08:28:35 compute-0 sudo[79666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79666]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1284038165' entity='client.admin' 
Jan 27 08:28:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:35 compute-0 ceph-mon[74357]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 27 08:28:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 27 08:28:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:28:35 compute-0 ansible-async_wrapper.py[79638]: Invoked with j891207263561 30 /home/zuul/.ansible/tmp/ansible-tmp-1769502515.1020823-37172-221729251322905/AnsiballZ_command.py _
Jan 27 08:28:35 compute-0 ansible-async_wrapper.py[79716]: Starting module and watcher
Jan 27 08:28:35 compute-0 ansible-async_wrapper.py[79716]: Start watching 79717 (30)
Jan 27 08:28:35 compute-0 ansible-async_wrapper.py[79717]: Start module (79717)
Jan 27 08:28:35 compute-0 ansible-async_wrapper.py[79638]: Return async_wrapper task started.
Jan 27 08:28:35 compute-0 sudo[79691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:35 compute-0 sudo[79691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79691]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79616]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.conf.new
Jan 27 08:28:35 compute-0 sudo[79721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79721]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 sudo[79746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:35 compute-0 sudo[79746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79746]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 python3[79718]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:28:35 compute-0 sudo[79771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 27 08:28:35 compute-0 sudo[79771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79771]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:28:35 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:28:35 compute-0 podman[79788]: 2026-01-27 08:28:35.929228341 +0000 UTC m=+0.044576497 container create 2f643c6015544603a36edbbe191921408273cd1720f4dc76b545628e331c07f4 (image=quay.io/ceph/ceph:v18, name=peaceful_edison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:28:35 compute-0 systemd[1]: Started libpod-conmon-2f643c6015544603a36edbbe191921408273cd1720f4dc76b545628e331c07f4.scope.
Jan 27 08:28:35 compute-0 sudo[79807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:35 compute-0 sudo[79807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:35 compute-0 sudo[79807]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:35 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b4ed9f6a5f9dca3ce1b6862cee8c6a0083c108f2b3faa7ce82a3caf855f829/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b4ed9f6a5f9dca3ce1b6862cee8c6a0083c108f2b3faa7ce82a3caf855f829/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:36 compute-0 podman[79788]: 2026-01-27 08:28:35.907732759 +0000 UTC m=+0.023080935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:36 compute-0 podman[79788]: 2026-01-27 08:28:36.008861575 +0000 UTC m=+0.124209781 container init 2f643c6015544603a36edbbe191921408273cd1720f4dc76b545628e331c07f4 (image=quay.io/ceph/ceph:v18, name=peaceful_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:28:36 compute-0 podman[79788]: 2026-01-27 08:28:36.016718257 +0000 UTC m=+0.132066403 container start 2f643c6015544603a36edbbe191921408273cd1720f4dc76b545628e331c07f4 (image=quay.io/ceph/ceph:v18, name=peaceful_edison, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:28:36 compute-0 podman[79788]: 2026-01-27 08:28:36.020596342 +0000 UTC m=+0.135944498 container attach 2f643c6015544603a36edbbe191921408273cd1720f4dc76b545628e331c07f4 (image=quay.io/ceph/ceph:v18, name=peaceful_edison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 08:28:36 compute-0 sudo[79839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config
Jan 27 08:28:36 compute-0 sudo[79839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[79839]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 sudo[79865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:36 compute-0 sudo[79865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[79865]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 sudo[79890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config
Jan 27 08:28:36 compute-0 sudo[79890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[79890]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 sudo[79915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:36 compute-0 sudo[79915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[79915]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 sudo[79940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf.new
Jan 27 08:28:36 compute-0 sudo[79940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[79940]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 sudo[79965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:36 compute-0 sudo[79965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[79965]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:36 compute-0 sudo[79990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:28:36 compute-0 sudo[79990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[79990]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 sudo[80034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:36 compute-0 sudo[80034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[80034]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 sudo[80059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf.new
Jan 27 08:28:36 compute-0 sudo[80059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[80059]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 sudo[80107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:36 compute-0 sudo[80107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[80107]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 sudo[80132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf.new
Jan 27 08:28:36 compute-0 sudo[80132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[80132]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:28:36 compute-0 peaceful_edison[79835]: 
Jan 27 08:28:36 compute-0 peaceful_edison[79835]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 27 08:28:36 compute-0 systemd[1]: libpod-2f643c6015544603a36edbbe191921408273cd1720f4dc76b545628e331c07f4.scope: Deactivated successfully.
Jan 27 08:28:36 compute-0 podman[79788]: 2026-01-27 08:28:36.653250709 +0000 UTC m=+0.768598865 container died 2f643c6015544603a36edbbe191921408273cd1720f4dc76b545628e331c07f4 (image=quay.io/ceph/ceph:v18, name=peaceful_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-69b4ed9f6a5f9dca3ce1b6862cee8c6a0083c108f2b3faa7ce82a3caf855f829-merged.mount: Deactivated successfully.
Jan 27 08:28:36 compute-0 sudo[80158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:36 compute-0 sudo[80158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[80158]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 ceph-mon[74357]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:36 compute-0 ceph-mon[74357]: Updating compute-0:/etc/ceph/ceph.conf
Jan 27 08:28:36 compute-0 podman[79788]: 2026-01-27 08:28:36.700668842 +0000 UTC m=+0.816016998 container remove 2f643c6015544603a36edbbe191921408273cd1720f4dc76b545628e331c07f4 (image=quay.io/ceph/ceph:v18, name=peaceful_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:28:36 compute-0 systemd[1]: libpod-conmon-2f643c6015544603a36edbbe191921408273cd1720f4dc76b545628e331c07f4.scope: Deactivated successfully.
Jan 27 08:28:36 compute-0 ansible-async_wrapper.py[79717]: Module complete (79717)
Jan 27 08:28:36 compute-0 sudo[80196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf.new
Jan 27 08:28:36 compute-0 sudo[80196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[80196]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 sudo[80221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:36 compute-0 sudo[80221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[80221]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 sudo[80269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf.new /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:28:36 compute-0 sudo[80269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[80269]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 27 08:28:36 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 27 08:28:36 compute-0 sudo[80294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:36 compute-0 sudo[80294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[80294]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:36 compute-0 sudo[80319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 27 08:28:36 compute-0 sudo[80319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:36 compute-0 sudo[80319]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:37 compute-0 sudo[80344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80344]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80390]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grsnrhnpcyabajaagoedcxiqlybdbapq ; /usr/bin/python3'
Jan 27 08:28:37 compute-0 sudo[80390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:28:37 compute-0 sudo[80394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph
Jan 27 08:28:37 compute-0 sudo[80394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80394]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:37 compute-0 sudo[80420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80420]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 python3[80395]: ansible-ansible.legacy.async_status Invoked with jid=j891207263561.79638 mode=status _async_dir=/root/.ansible_async
Jan 27 08:28:37 compute-0 sudo[80390]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.client.admin.keyring.new
Jan 27 08:28:37 compute-0 sudo[80445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80445]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:37 compute-0 sudo[80470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80470]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:28:37 compute-0 sudo[80518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80518]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twduoqelebgmdfgguhozhbaqlpxzgmhv ; /usr/bin/python3'
Jan 27 08:28:37 compute-0 sudo[80564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:28:37 compute-0 sudo[80568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:37 compute-0 sudo[80568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80568]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.client.admin.keyring.new
Jan 27 08:28:37 compute-0 sudo[80594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80594]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 python3[80569]: ansible-ansible.legacy.async_status Invoked with jid=j891207263561.79638 mode=cleanup _async_dir=/root/.ansible_async
Jan 27 08:28:37 compute-0 sudo[80564]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:37 compute-0 sudo[80642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80642]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.client.admin.keyring.new
Jan 27 08:28:37 compute-0 sudo[80667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80667]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:37 compute-0 sudo[80692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80692]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.client.admin.keyring.new
Jan 27 08:28:37 compute-0 sudo[80717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80717]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 ceph-mon[74357]: Updating compute-0:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:28:37 compute-0 ceph-mon[74357]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:28:37 compute-0 sudo[80742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:37 compute-0 sudo[80742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80742]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhjfaujafqtfznkurhkwreijjsygyppm ; /usr/bin/python3'
Jan 27 08:28:37 compute-0 sudo[80791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:28:37 compute-0 sudo[80790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 27 08:28:37 compute-0 sudo[80790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80790]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring
Jan 27 08:28:37 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring
Jan 27 08:28:37 compute-0 sudo[80818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:37 compute-0 sudo[80818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80818]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 python3[80800]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:28:37 compute-0 sudo[80843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config
Jan 27 08:28:37 compute-0 sudo[80843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80843]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80791]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:37 compute-0 sudo[80870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80870]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:37 compute-0 sudo[80895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config
Jan 27 08:28:37 compute-0 sudo[80895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:37 compute-0 sudo[80895]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:38 compute-0 sudo[80920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:38 compute-0 sudo[80920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:38 compute-0 sudo[80920]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:38 compute-0 sudo[80945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring.new
Jan 27 08:28:38 compute-0 sudo[80945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:38 compute-0 sudo[80945]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:38 compute-0 sudo[80970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:38 compute-0 sudo[80970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:38 compute-0 sudo[80970]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:38 compute-0 sudo[80995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:28:38 compute-0 sudo[80995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:38 compute-0 sudo[80995]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:38 compute-0 sudo[81043]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otelprtezshzdnbzjfkxarinlipmmarh ; /usr/bin/python3'
Jan 27 08:28:38 compute-0 sudo[81043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:28:38 compute-0 sudo[81044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:38 compute-0 sudo[81044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:38 compute-0 sudo[81044]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:38 compute-0 ceph-mon[74357]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 27 08:28:38 compute-0 ceph-mon[74357]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:38 compute-0 sudo[81071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring.new
Jan 27 08:28:38 compute-0 sudo[81071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:38 compute-0 sudo[81071]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:38 compute-0 python3[81053]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:28:38 compute-0 podman[81116]: 2026-01-27 08:28:38.994299464 +0000 UTC m=+0.041109189 container create 61fd94eee26f248de58192a3d830e6a0f75f21db9b1bbedd9c60fdd377da2b1a (image=quay.io/ceph/ceph:v18, name=stupefied_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 27 08:28:39 compute-0 sudo[81125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:39 compute-0 sudo[81125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:39 compute-0 sudo[81125]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:39 compute-0 systemd[1]: Started libpod-conmon-61fd94eee26f248de58192a3d830e6a0f75f21db9b1bbedd9c60fdd377da2b1a.scope.
Jan 27 08:28:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c40eb62bc4108a23a21d1fde53c7d4bf014cb37bdb31d245b33585a94615f92/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c40eb62bc4108a23a21d1fde53c7d4bf014cb37bdb31d245b33585a94615f92/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c40eb62bc4108a23a21d1fde53c7d4bf014cb37bdb31d245b33585a94615f92/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:39 compute-0 podman[81116]: 2026-01-27 08:28:38.977558219 +0000 UTC m=+0.024367964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:39 compute-0 sudo[81160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring.new
Jan 27 08:28:39 compute-0 sudo[81160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:39 compute-0 podman[81116]: 2026-01-27 08:28:39.082010788 +0000 UTC m=+0.128820543 container init 61fd94eee26f248de58192a3d830e6a0f75f21db9b1bbedd9c60fdd377da2b1a (image=quay.io/ceph/ceph:v18, name=stupefied_meitner, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 27 08:28:39 compute-0 sudo[81160]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:39 compute-0 podman[81116]: 2026-01-27 08:28:39.089831731 +0000 UTC m=+0.136641456 container start 61fd94eee26f248de58192a3d830e6a0f75f21db9b1bbedd9c60fdd377da2b1a (image=quay.io/ceph/ceph:v18, name=stupefied_meitner, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:39 compute-0 podman[81116]: 2026-01-27 08:28:39.092844383 +0000 UTC m=+0.139654128 container attach 61fd94eee26f248de58192a3d830e6a0f75f21db9b1bbedd9c60fdd377da2b1a (image=quay.io/ceph/ceph:v18, name=stupefied_meitner, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 27 08:28:39 compute-0 sudo[81189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:39 compute-0 sudo[81189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:39 compute-0 sudo[81189]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:39 compute-0 sudo[81214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring.new
Jan 27 08:28:39 compute-0 sudo[81214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:39 compute-0 sudo[81214]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:39 compute-0 sudo[81239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:39 compute-0 sudo[81239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:39 compute-0 sudo[81239]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:39 compute-0 sudo[81264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring.new /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring
Jan 27 08:28:39 compute-0 sudo[81264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:39 compute-0 sudo[81264]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:28:39 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:28:39 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:28:39 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:39 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev 01289839-9a3c-4f7c-8411-a597282339e8 (Updating crash deployment (+1 -> 1))
Jan 27 08:28:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 27 08:28:39 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 27 08:28:39 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 27 08:28:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:28:39 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:39 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 27 08:28:39 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 27 08:28:39 compute-0 sudo[81305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:39 compute-0 sudo[81305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:39 compute-0 sudo[81305]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:39 compute-0 sudo[81333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:28:39 compute-0 sudo[81333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:39 compute-0 sudo[81333]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:39 compute-0 sudo[81358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:39 compute-0 sudo[81358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:39 compute-0 sudo[81358]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:39 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:28:39 compute-0 stupefied_meitner[81166]: 
Jan 27 08:28:39 compute-0 stupefied_meitner[81166]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 27 08:28:39 compute-0 systemd[1]: libpod-61fd94eee26f248de58192a3d830e6a0f75f21db9b1bbedd9c60fdd377da2b1a.scope: Deactivated successfully.
Jan 27 08:28:39 compute-0 podman[81116]: 2026-01-27 08:28:39.642048736 +0000 UTC m=+0.688858461 container died 61fd94eee26f248de58192a3d830e6a0f75f21db9b1bbedd9c60fdd377da2b1a (image=quay.io/ceph/ceph:v18, name=stupefied_meitner, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 27 08:28:39 compute-0 sudo[81383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:28:39 compute-0 sudo[81383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c40eb62bc4108a23a21d1fde53c7d4bf014cb37bdb31d245b33585a94615f92-merged.mount: Deactivated successfully.
Jan 27 08:28:39 compute-0 podman[81116]: 2026-01-27 08:28:39.680104311 +0000 UTC m=+0.726914036 container remove 61fd94eee26f248de58192a3d830e6a0f75f21db9b1bbedd9c60fdd377da2b1a (image=quay.io/ceph/ceph:v18, name=stupefied_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 08:28:39 compute-0 systemd[1]: libpod-conmon-61fd94eee26f248de58192a3d830e6a0f75f21db9b1bbedd9c60fdd377da2b1a.scope: Deactivated successfully.
Jan 27 08:28:39 compute-0 sudo[81043]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:39 compute-0 ceph-mon[74357]: Updating compute-0:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring
Jan 27 08:28:39 compute-0 ceph-mon[74357]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 27 08:28:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 27 08:28:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:39 compute-0 ceph-mon[74357]: Deploying daemon crash.compute-0 on compute-0
Jan 27 08:28:39 compute-0 ceph-mon[74357]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:28:40 compute-0 podman[81464]: 2026-01-27 08:28:40.003070282 +0000 UTC m=+0.044605964 container create 374f2c5df7a11803e66079001fbef47c8c7cc40adcb1974a4833bd4b756f755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mestorf, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:40 compute-0 systemd[1]: Started libpod-conmon-374f2c5df7a11803e66079001fbef47c8c7cc40adcb1974a4833bd4b756f755d.scope.
Jan 27 08:28:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:40 compute-0 sudo[81503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzlvuqjpkmmudgjxhbbvnmgibslagkvp ; /usr/bin/python3'
Jan 27 08:28:40 compute-0 sudo[81503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:28:40 compute-0 podman[81464]: 2026-01-27 08:28:40.070929817 +0000 UTC m=+0.112465499 container init 374f2c5df7a11803e66079001fbef47c8c7cc40adcb1974a4833bd4b756f755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:28:40 compute-0 podman[81464]: 2026-01-27 08:28:40.077338491 +0000 UTC m=+0.118874153 container start 374f2c5df7a11803e66079001fbef47c8c7cc40adcb1974a4833bd4b756f755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mestorf, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:40 compute-0 podman[81464]: 2026-01-27 08:28:39.985023911 +0000 UTC m=+0.026559583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:28:40 compute-0 unruffled_mestorf[81504]: 167 167
Jan 27 08:28:40 compute-0 podman[81464]: 2026-01-27 08:28:40.082052209 +0000 UTC m=+0.123587891 container attach 374f2c5df7a11803e66079001fbef47c8c7cc40adcb1974a4833bd4b756f755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mestorf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 27 08:28:40 compute-0 systemd[1]: libpod-374f2c5df7a11803e66079001fbef47c8c7cc40adcb1974a4833bd4b756f755d.scope: Deactivated successfully.
Jan 27 08:28:40 compute-0 podman[81464]: 2026-01-27 08:28:40.083054176 +0000 UTC m=+0.124589838 container died 374f2c5df7a11803e66079001fbef47c8c7cc40adcb1974a4833bd4b756f755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:28:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfff3ea6095b3e8e2b843f33099d5019abef760eb8bb3575709838800a2d6e17-merged.mount: Deactivated successfully.
Jan 27 08:28:40 compute-0 podman[81464]: 2026-01-27 08:28:40.12768239 +0000 UTC m=+0.169218052 container remove 374f2c5df7a11803e66079001fbef47c8c7cc40adcb1974a4833bd4b756f755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mestorf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:28:40 compute-0 systemd[1]: libpod-conmon-374f2c5df7a11803e66079001fbef47c8c7cc40adcb1974a4833bd4b756f755d.scope: Deactivated successfully.
Jan 27 08:28:40 compute-0 systemd[1]: Reloading.
Jan 27 08:28:40 compute-0 systemd-rc-local-generator[81546]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:28:40 compute-0 systemd-sysv-generator[81554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:28:40 compute-0 python3[81508]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:28:40 compute-0 podman[81557]: 2026-01-27 08:28:40.293036936 +0000 UTC m=+0.041167660 container create 0252749cd71f23ce6ddce8c431e89abbdd47cb4d1e69160d2d143845a317e0de (image=quay.io/ceph/ceph:v18, name=sleepy_solomon, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:40 compute-0 podman[81557]: 2026-01-27 08:28:40.275987532 +0000 UTC m=+0.024118276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:40 compute-0 systemd[1]: Started libpod-conmon-0252749cd71f23ce6ddce8c431e89abbdd47cb4d1e69160d2d143845a317e0de.scope.
Jan 27 08:28:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ac5bed24908034b0f461c4ed84ee153a41688cd57f1a2db0321475e235a0bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ac5bed24908034b0f461c4ed84ee153a41688cd57f1a2db0321475e235a0bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ac5bed24908034b0f461c4ed84ee153a41688cd57f1a2db0321475e235a0bd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:40 compute-0 podman[81557]: 2026-01-27 08:28:40.451288129 +0000 UTC m=+0.199418863 container init 0252749cd71f23ce6ddce8c431e89abbdd47cb4d1e69160d2d143845a317e0de (image=quay.io/ceph/ceph:v18, name=sleepy_solomon, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 27 08:28:40 compute-0 podman[81557]: 2026-01-27 08:28:40.460724015 +0000 UTC m=+0.208854749 container start 0252749cd71f23ce6ddce8c431e89abbdd47cb4d1e69160d2d143845a317e0de (image=quay.io/ceph/ceph:v18, name=sleepy_solomon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 27 08:28:40 compute-0 systemd[1]: Reloading.
Jan 27 08:28:40 compute-0 podman[81557]: 2026-01-27 08:28:40.464903159 +0000 UTC m=+0.213033883 container attach 0252749cd71f23ce6ddce8c431e89abbdd47cb4d1e69160d2d143845a317e0de (image=quay.io/ceph/ceph:v18, name=sleepy_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:40 compute-0 systemd-sysv-generator[81617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:28:40 compute-0 systemd-rc-local-generator[81613]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:28:40 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 281e9bde-2795-59f4-98ac-90cf5b49a2de...
Jan 27 08:28:40 compute-0 ansible-async_wrapper.py[79716]: Done in kid B.
Jan 27 08:28:40 compute-0 podman[81688]: 2026-01-27 08:28:40.912118098 +0000 UTC m=+0.039861434 container create 7962a418399e154ddbb19f9a5999561a1283b1e453899928354d3dd6d501b889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-crash-compute-0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4109a71ab887bd41fa735d3fe41a17f20ce9a35bb078f64d115ee0eeec7267/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4109a71ab887bd41fa735d3fe41a17f20ce9a35bb078f64d115ee0eeec7267/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4109a71ab887bd41fa735d3fe41a17f20ce9a35bb078f64d115ee0eeec7267/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:40 compute-0 podman[81688]: 2026-01-27 08:28:40.889843783 +0000 UTC m=+0.017587129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4109a71ab887bd41fa735d3fe41a17f20ce9a35bb078f64d115ee0eeec7267/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:40 compute-0 podman[81688]: 2026-01-27 08:28:40.997095479 +0000 UTC m=+0.124838825 container init 7962a418399e154ddbb19f9a5999561a1283b1e453899928354d3dd6d501b889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-crash-compute-0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 27 08:28:41 compute-0 podman[81688]: 2026-01-27 08:28:41.00156591 +0000 UTC m=+0.129309236 container start 7962a418399e154ddbb19f9a5999561a1283b1e453899928354d3dd6d501b889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:28:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Jan 27 08:28:41 compute-0 bash[81688]: 7962a418399e154ddbb19f9a5999561a1283b1e453899928354d3dd6d501b889
Jan 27 08:28:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1393027980' entity='client.admin' 
Jan 27 08:28:41 compute-0 systemd[1]: Started Ceph crash.compute-0 for 281e9bde-2795-59f4-98ac-90cf5b49a2de.
Jan 27 08:28:41 compute-0 systemd[1]: libpod-0252749cd71f23ce6ddce8c431e89abbdd47cb4d1e69160d2d143845a317e0de.scope: Deactivated successfully.
Jan 27 08:28:41 compute-0 podman[81557]: 2026-01-27 08:28:41.032202003 +0000 UTC m=+0.780332727 container died 0252749cd71f23ce6ddce8c431e89abbdd47cb4d1e69160d2d143845a317e0de (image=quay.io/ceph/ceph:v18, name=sleepy_solomon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:28:41 compute-0 sudo[81383]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:28:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:28:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2ac5bed24908034b0f461c4ed84ee153a41688cd57f1a2db0321475e235a0bd-merged.mount: Deactivated successfully.
Jan 27 08:28:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 27 08:28:41 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev 01289839-9a3c-4f7c-8411-a597282339e8 (Updating crash deployment (+1 -> 1))
Jan 27 08:28:41 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event 01289839-9a3c-4f7c-8411-a597282339e8 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 27 08:28:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 27 08:28:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:41 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f377530d-319a-4bb7-bb02-4a7260a069bb does not exist
Jan 27 08:28:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 27 08:28:41 compute-0 podman[81557]: 2026-01-27 08:28:41.083852478 +0000 UTC m=+0.831983202 container remove 0252749cd71f23ce6ddce8c431e89abbdd47cb4d1e69160d2d143845a317e0de (image=quay.io/ceph/ceph:v18, name=sleepy_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 27 08:28:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:41 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 3083342b-d303-4c89-84ce-da9b5268168a does not exist
Jan 27 08:28:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 27 08:28:41 compute-0 systemd[1]: libpod-conmon-0252749cd71f23ce6ddce8c431e89abbdd47cb4d1e69160d2d143845a317e0de.scope: Deactivated successfully.
Jan 27 08:28:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:41 compute-0 sudo[81503]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:41 compute-0 sudo[81723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:41 compute-0 sudo[81723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:41 compute-0 sudo[81723]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:41 compute-0 sudo[81748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:28:41 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-crash-compute-0[81703]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 27 08:28:41 compute-0 sudo[81748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:41 compute-0 sudo[81748]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:41 compute-0 sudo[81821]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urqvzmsuztpytqgdamynvuscklfigjei ; /usr/bin/python3'
Jan 27 08:28:41 compute-0 sudo[81821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:28:41 compute-0 sudo[81775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:41 compute-0 sudo[81775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:41 compute-0 sudo[81775]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:41 compute-0 sudo[81826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:28:41 compute-0 sudo[81826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:41 compute-0 sudo[81826]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:41 compute-0 sudo[81851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:41 compute-0 sudo[81851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:41 compute-0 sudo[81851]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:41 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-crash-compute-0[81703]: 2026-01-27T08:28:41.403+0000 7fdaa4380640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 27 08:28:41 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-crash-compute-0[81703]: 2026-01-27T08:28:41.403+0000 7fdaa4380640 -1 AuthRegistry(0x7fda9c066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 27 08:28:41 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-crash-compute-0[81703]: 2026-01-27T08:28:41.404+0000 7fdaa4380640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 27 08:28:41 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-crash-compute-0[81703]: 2026-01-27T08:28:41.404+0000 7fdaa4380640 -1 AuthRegistry(0x7fdaa437f000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 27 08:28:41 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-crash-compute-0[81703]: 2026-01-27T08:28:41.405+0000 7fdaa20f5640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 27 08:28:41 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-crash-compute-0[81703]: 2026-01-27T08:28:41.406+0000 7fdaa4380640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 27 08:28:41 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-crash-compute-0[81703]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 27 08:28:41 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-crash-compute-0[81703]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 27 08:28:41 compute-0 python3[81824]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:28:41 compute-0 sudo[81877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 27 08:28:41 compute-0 sudo[81877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:41 compute-0 podman[81909]: 2026-01-27 08:28:41.527361057 +0000 UTC m=+0.063522049 container create a1722d6ec79e03170d9d009c28050e23ecdabdf85cc5c4cfa013af21c5b10829 (image=quay.io/ceph/ceph:v18, name=dazzling_morse, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:28:41 compute-0 systemd[1]: Started libpod-conmon-a1722d6ec79e03170d9d009c28050e23ecdabdf85cc5c4cfa013af21c5b10829.scope.
Jan 27 08:28:41 compute-0 podman[81909]: 2026-01-27 08:28:41.504759942 +0000 UTC m=+0.040920964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/673b9e211542a294ef4410a7ddf2afbe0ff2218a9cb6c33cf2eb31f259982ba1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/673b9e211542a294ef4410a7ddf2afbe0ff2218a9cb6c33cf2eb31f259982ba1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/673b9e211542a294ef4410a7ddf2afbe0ff2218a9cb6c33cf2eb31f259982ba1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:41 compute-0 podman[81909]: 2026-01-27 08:28:41.636158345 +0000 UTC m=+0.172319367 container init a1722d6ec79e03170d9d009c28050e23ecdabdf85cc5c4cfa013af21c5b10829 (image=quay.io/ceph/ceph:v18, name=dazzling_morse, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 08:28:41 compute-0 podman[81909]: 2026-01-27 08:28:41.643754272 +0000 UTC m=+0.179915284 container start a1722d6ec79e03170d9d009c28050e23ecdabdf85cc5c4cfa013af21c5b10829 (image=quay.io/ceph/ceph:v18, name=dazzling_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:28:41 compute-0 podman[81909]: 2026-01-27 08:28:41.647319519 +0000 UTC m=+0.183480531 container attach a1722d6ec79e03170d9d009c28050e23ecdabdf85cc5c4cfa013af21c5b10829 (image=quay.io/ceph/ceph:v18, name=dazzling_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 27 08:28:41 compute-0 podman[82002]: 2026-01-27 08:28:41.91512551 +0000 UTC m=+0.059885979 container exec b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:42 compute-0 ceph-mon[74357]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:42 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1393027980' entity='client.admin' 
Jan 27 08:28:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 podman[82002]: 2026-01-27 08:28:42.033740725 +0000 UTC m=+0.178501184 container exec_died b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/501982649' entity='client.admin' 
Jan 27 08:28:42 compute-0 sudo[81877]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 systemd[1]: libpod-a1722d6ec79e03170d9d009c28050e23ecdabdf85cc5c4cfa013af21c5b10829.scope: Deactivated successfully.
Jan 27 08:28:42 compute-0 podman[81909]: 2026-01-27 08:28:42.25457422 +0000 UTC m=+0.790735212 container died a1722d6ec79e03170d9d009c28050e23ecdabdf85cc5c4cfa013af21c5b10829 (image=quay.io/ceph/ceph:v18, name=dazzling_morse, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f3d10ee0-e935-4944-98b1-f4bc28f63566 does not exist
Jan 27 08:28:42 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev fef8843a-18f3-4dd9-b090-d2b29bc51bfc does not exist
Jan 27 08:28:42 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 40773db3-48b9-43f5-8a14-be9ace44fe73 does not exist
Jan 27 08:28:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-673b9e211542a294ef4410a7ddf2afbe0ff2218a9cb6c33cf2eb31f259982ba1-merged.mount: Deactivated successfully.
Jan 27 08:28:42 compute-0 podman[81909]: 2026-01-27 08:28:42.300669883 +0000 UTC m=+0.836830875 container remove a1722d6ec79e03170d9d009c28050e23ecdabdf85cc5c4cfa013af21c5b10829 (image=quay.io/ceph/ceph:v18, name=dazzling_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:28:42 compute-0 sudo[81821]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:42 compute-0 systemd[1]: libpod-conmon-a1722d6ec79e03170d9d009c28050e23ecdabdf85cc5c4cfa013af21c5b10829.scope: Deactivated successfully.
Jan 27 08:28:42 compute-0 sudo[82105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:42 compute-0 sudo[82105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:42 compute-0 sudo[82105]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:42 compute-0 sudo[82130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:28:42 compute-0 sudo[82130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:42 compute-0 sudo[82130]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:42 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 27 08:28:42 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 27 08:28:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:28:42 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:42 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 27 08:28:42 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 27 08:28:42 compute-0 sudo[82155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:42 compute-0 sudo[82155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:42 compute-0 sudo[82155]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:42 compute-0 sudo[82208]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sacmdvuedrncuycuwbemmajlnlegpzum ; /usr/bin/python3'
Jan 27 08:28:42 compute-0 sudo[82208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:28:42 compute-0 sudo[82201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:28:42 compute-0 sudo[82201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:42 compute-0 sudo[82201]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:42 compute-0 sudo[82231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:42 compute-0 sudo[82231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:42 compute-0 sudo[82231]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:42 compute-0 python3[82223]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:28:42 compute-0 sudo[82256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:28:42 compute-0 sudo[82256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:42 compute-0 podman[82279]: 2026-01-27 08:28:42.725925316 +0000 UTC m=+0.038573700 container create 4ff3b857a90dcad1f3fe9e0a22c0cabb8d1a96a6fee5e4e89cc73c140a1e2000 (image=quay.io/ceph/ceph:v18, name=blissful_neumann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:28:42 compute-0 systemd[1]: Started libpod-conmon-4ff3b857a90dcad1f3fe9e0a22c0cabb8d1a96a6fee5e4e89cc73c140a1e2000.scope.
Jan 27 08:28:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b94598a8167f55eb7f0b12fa14e66c4d2454c4bf19bc3641a45bf0ab8e3c237/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b94598a8167f55eb7f0b12fa14e66c4d2454c4bf19bc3641a45bf0ab8e3c237/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b94598a8167f55eb7f0b12fa14e66c4d2454c4bf19bc3641a45bf0ab8e3c237/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:42 compute-0 podman[82279]: 2026-01-27 08:28:42.787402247 +0000 UTC m=+0.100050641 container init 4ff3b857a90dcad1f3fe9e0a22c0cabb8d1a96a6fee5e4e89cc73c140a1e2000 (image=quay.io/ceph/ceph:v18, name=blissful_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 27 08:28:42 compute-0 podman[82279]: 2026-01-27 08:28:42.793895324 +0000 UTC m=+0.106543708 container start 4ff3b857a90dcad1f3fe9e0a22c0cabb8d1a96a6fee5e4e89cc73c140a1e2000 (image=quay.io/ceph/ceph:v18, name=blissful_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 08:28:42 compute-0 podman[82279]: 2026-01-27 08:28:42.79707452 +0000 UTC m=+0.109722904 container attach 4ff3b857a90dcad1f3fe9e0a22c0cabb8d1a96a6fee5e4e89cc73c140a1e2000 (image=quay.io/ceph/ceph:v18, name=blissful_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:28:42 compute-0 podman[82279]: 2026-01-27 08:28:42.710761933 +0000 UTC m=+0.023410317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:42 compute-0 podman[82316]: 2026-01-27 08:28:42.90225622 +0000 UTC m=+0.034451978 container create 89bdf8ea6e7e6c1d53548ad6242f1e032b4dc3bac09fedb04afd9f35794f2851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mendel, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:28:42 compute-0 systemd[1]: Started libpod-conmon-89bdf8ea6e7e6c1d53548ad6242f1e032b4dc3bac09fedb04afd9f35794f2851.scope.
Jan 27 08:28:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:42 compute-0 podman[82316]: 2026-01-27 08:28:42.964647596 +0000 UTC m=+0.096843384 container init 89bdf8ea6e7e6c1d53548ad6242f1e032b4dc3bac09fedb04afd9f35794f2851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mendel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:28:42 compute-0 podman[82316]: 2026-01-27 08:28:42.969997722 +0000 UTC m=+0.102193480 container start 89bdf8ea6e7e6c1d53548ad6242f1e032b4dc3bac09fedb04afd9f35794f2851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mendel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:28:42 compute-0 podman[82316]: 2026-01-27 08:28:42.973174508 +0000 UTC m=+0.105370266 container attach 89bdf8ea6e7e6c1d53548ad6242f1e032b4dc3bac09fedb04afd9f35794f2851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:28:42 compute-0 kind_mendel[82333]: 167 167
Jan 27 08:28:42 compute-0 systemd[1]: libpod-89bdf8ea6e7e6c1d53548ad6242f1e032b4dc3bac09fedb04afd9f35794f2851.scope: Deactivated successfully.
Jan 27 08:28:42 compute-0 podman[82316]: 2026-01-27 08:28:42.974460184 +0000 UTC m=+0.106655942 container died 89bdf8ea6e7e6c1d53548ad6242f1e032b4dc3bac09fedb04afd9f35794f2851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mendel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:28:42 compute-0 podman[82316]: 2026-01-27 08:28:42.886595324 +0000 UTC m=+0.018791092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:28:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4c536b102dde5d04db22d7e433be365c862ab8681107d376cf279a997623c01-merged.mount: Deactivated successfully.
Jan 27 08:28:43 compute-0 podman[82316]: 2026-01-27 08:28:43.008083178 +0000 UTC m=+0.140278976 container remove 89bdf8ea6e7e6c1d53548ad6242f1e032b4dc3bac09fedb04afd9f35794f2851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:43 compute-0 systemd[1]: libpod-conmon-89bdf8ea6e7e6c1d53548ad6242f1e032b4dc3bac09fedb04afd9f35794f2851.scope: Deactivated successfully.
Jan 27 08:28:43 compute-0 sudo[82256]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:28:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:28:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.vujqxq (unknown last config time)...
Jan 27 08:28:43 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.vujqxq (unknown last config time)...
Jan 27 08:28:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.vujqxq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 27 08:28:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vujqxq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 27 08:28:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 27 08:28:43 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 08:28:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:28:43 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:43 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.vujqxq on compute-0
Jan 27 08:28:43 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.vujqxq on compute-0
Jan 27 08:28:43 compute-0 sudo[82370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:43 compute-0 sudo[82370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:43 compute-0 sudo[82370]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:43 compute-0 sudo[82395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/501982649' entity='client.admin' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:43 compute-0 sudo[82395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:43 compute-0 ceph-mon[74357]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vujqxq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 08:28:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:43 compute-0 sudo[82395]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:43 compute-0 sudo[82420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:43 compute-0 sudo[82420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:43 compute-0 sudo[82420]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Jan 27 08:28:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1785301263' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 27 08:28:43 compute-0 sudo[82445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:28:43 compute-0 sudo[82445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:43 compute-0 podman[82488]: 2026-01-27 08:28:43.561203687 +0000 UTC m=+0.038819197 container create 632a8a7d2cbb6f467fb9dc09a6ee79c0e6b5b6d3378d2de88b1aabe12c135742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:28:43 compute-0 systemd[1]: Started libpod-conmon-632a8a7d2cbb6f467fb9dc09a6ee79c0e6b5b6d3378d2de88b1aabe12c135742.scope.
Jan 27 08:28:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:43 compute-0 podman[82488]: 2026-01-27 08:28:43.626955314 +0000 UTC m=+0.104570834 container init 632a8a7d2cbb6f467fb9dc09a6ee79c0e6b5b6d3378d2de88b1aabe12c135742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_meninsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 08:28:43 compute-0 podman[82488]: 2026-01-27 08:28:43.633756209 +0000 UTC m=+0.111371729 container start 632a8a7d2cbb6f467fb9dc09a6ee79c0e6b5b6d3378d2de88b1aabe12c135742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_meninsky, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:43 compute-0 tender_meninsky[82504]: 167 167
Jan 27 08:28:43 compute-0 podman[82488]: 2026-01-27 08:28:43.637428199 +0000 UTC m=+0.115043729 container attach 632a8a7d2cbb6f467fb9dc09a6ee79c0e6b5b6d3378d2de88b1aabe12c135742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_meninsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:28:43 compute-0 podman[82488]: 2026-01-27 08:28:43.542827056 +0000 UTC m=+0.020442576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:28:43 compute-0 systemd[1]: libpod-632a8a7d2cbb6f467fb9dc09a6ee79c0e6b5b6d3378d2de88b1aabe12c135742.scope: Deactivated successfully.
Jan 27 08:28:43 compute-0 podman[82488]: 2026-01-27 08:28:43.63892946 +0000 UTC m=+0.116544960 container died 632a8a7d2cbb6f467fb9dc09a6ee79c0e6b5b6d3378d2de88b1aabe12c135742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:28:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-428a002c01ed5761757817b40ef17ca26e508e46464bd5512f82ac016675ebbc-merged.mount: Deactivated successfully.
Jan 27 08:28:43 compute-0 podman[82488]: 2026-01-27 08:28:43.672366499 +0000 UTC m=+0.149981999 container remove 632a8a7d2cbb6f467fb9dc09a6ee79c0e6b5b6d3378d2de88b1aabe12c135742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_meninsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:43 compute-0 systemd[1]: libpod-conmon-632a8a7d2cbb6f467fb9dc09a6ee79c0e6b5b6d3378d2de88b1aabe12c135742.scope: Deactivated successfully.
Jan 27 08:28:43 compute-0 sudo[82445]: pam_unix(sudo:session): session closed for user root
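
The exchange above is the standard cephadm remote-execution pattern: the mgr's serve loop sudo-runs the copied cephadm binary with the container image pinned by digest, the binary spawns a short-lived podman container (tender_meninsky here), and systemd tears the conmon/libpod scopes down as soon as it exits. A sketch of inspecting the same deployment state by hand on the host — assuming a cephadm binary is available there (the copy under /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/ seen above also works):

    $ sudo cephadm ls                          # enumerate daemons deployed on this host
    $ sudo cephadm logs --name mon.compute-0   # journald logs for a single daemon

Both subcommands read the metadata under /var/lib/ceph/<fsid>/, so they work even while the mgr is busy.
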
Jan 27 08:28:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:28:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:28:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:28:43 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:28:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:28:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:28:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:43 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev dc63dbdc-8512-4c95-a381-60ad1a8e4232 does not exist
Jan 27 08:28:43 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 73023303-fba4-4462-892a-a279e3206a20 does not exist
Jan 27 08:28:43 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 781e1694-9378-4fe3-a518-b2282df9219b does not exist
Jan 27 08:28:43 compute-0 sudo[82523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:43 compute-0 sudo[82523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:43 compute-0 sudo[82523]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:43 compute-0 sudo[82548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:28:43 compute-0 sudo[82548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:43 compute-0 sudo[82548]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 27 08:28:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:28:44 compute-0 ceph-mon[74357]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:44 compute-0 ceph-mon[74357]: Reconfiguring mgr.compute-0.vujqxq (unknown last config time)...
Jan 27 08:28:44 compute-0 ceph-mon[74357]: Reconfiguring daemon mgr.compute-0.vujqxq on compute-0
Jan 27 08:28:44 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1785301263' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 27 08:28:44 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:44 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:44 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:44 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:28:44 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1785301263' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 27 08:28:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 27 08:28:44 compute-0 blissful_neumann[82296]: set require_min_compat_client to mimic
Jan 27 08:28:44 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 27 08:28:44 compute-0 systemd[1]: libpod-4ff3b857a90dcad1f3fe9e0a22c0cabb8d1a96a6fee5e4e89cc73c140a1e2000.scope: Deactivated successfully.
Jan 27 08:28:44 compute-0 conmon[82296]: conmon 4ff3b857a90dcad1f3fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ff3b857a90dcad1f3fe9e0a22c0cabb8d1a96a6fee5e4e89cc73c140a1e2000.scope/container/memory.events
Jan 27 08:28:44 compute-0 podman[82279]: 2026-01-27 08:28:44.262759193 +0000 UTC m=+1.575407577 container died 4ff3b857a90dcad1f3fe9e0a22c0cabb8d1a96a6fee5e4e89cc73c140a1e2000 (image=quay.io/ceph/ceph:v18, name=blissful_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b94598a8167f55eb7f0b12fa14e66c4d2454c4bf19bc3641a45bf0ab8e3c237-merged.mount: Deactivated successfully.
Jan 27 08:28:44 compute-0 podman[82279]: 2026-01-27 08:28:44.303988204 +0000 UTC m=+1.616636588 container remove 4ff3b857a90dcad1f3fe9e0a22c0cabb8d1a96a6fee5e4e89cc73c140a1e2000 (image=quay.io/ceph/ceph:v18, name=blissful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:28:44 compute-0 systemd[1]: libpod-conmon-4ff3b857a90dcad1f3fe9e0a22c0cabb8d1a96a6fee5e4e89cc73c140a1e2000.scope: Deactivated successfully.
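
The `osd set-require-min-compat-client` call issued through the throwaway container (blissful_neumann) pins the oldest client release the cluster will accept, a prerequisite for some of the OSD features cephadm enables later. The resulting flag can be verified from any node holding the admin keyring; a sketch, reusing the quay.io/ceph/ceph:v18 image from this log:

    $ sudo podman run --rm --net=host -v /etc/ceph:/etc/ceph:z \
          --entrypoint ceph quay.io/ceph/ceph:v18 \
          osd dump | grep require_min_compat_client    # expect: mimic
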
Jan 27 08:28:44 compute-0 sudo[82208]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:44 compute-0 sudo[82608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebodqhsbupzhdcfmwmloxgumbuxwnfun ; /usr/bin/python3'
Jan 27 08:28:44 compute-0 sudo[82608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:28:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:44 compute-0 python3[82610]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
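
This `orch apply --in-file /home/ceph_spec.yaml` call is declarative: it only saves service specs, and actual placement happens later in the mgr's serve loop. The spec file itself is not captured in this log, but from the placements echoed below (mon, mgr, and osd.default_drive_group on compute-0..2) a minimal equivalent would look roughly like this sketch — the `data_devices` clause is an assumption, not taken from the log:

    $ cat > ceph_spec.yaml <<'EOF'
    service_type: mon
    placement:
      hosts: [compute-0, compute-1, compute-2]
    ---
    service_type: mgr
    placement:
      hosts: [compute-0, compute-1, compute-2]
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts: [compute-0, compute-1, compute-2]
    spec:
      data_devices:
        all: true      # assumed; the real drive group filter is not in this log
    EOF
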
Jan 27 08:28:45 compute-0 ceph-mgr[74650]: [progress INFO root] Writing back 1 completed events
Jan 27 08:28:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 27 08:28:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:28:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:28:45 compute-0 podman[82611]: 2026-01-27 08:28:45.019030234 +0000 UTC m=+0.044152811 container create 9a51b37a2c6e055bfe06cc2679727955615310f59a4ab539bbaf9e5a40e4a91d (image=quay.io/ceph/ceph:v18, name=hungry_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:28:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:28:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:28:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:28:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:28:45 compute-0 systemd[1]: Started libpod-conmon-9a51b37a2c6e055bfe06cc2679727955615310f59a4ab539bbaf9e5a40e4a91d.scope.
Jan 27 08:28:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e33a66a679cea4a97f86ea71b30c329ec70db3100bd7c18857c16417a6a17c1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e33a66a679cea4a97f86ea71b30c329ec70db3100bd7c18857c16417a6a17c1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e33a66a679cea4a97f86ea71b30c329ec70db3100bd7c18857c16417a6a17c1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:45 compute-0 podman[82611]: 2026-01-27 08:28:45.083079586 +0000 UTC m=+0.108202193 container init 9a51b37a2c6e055bfe06cc2679727955615310f59a4ab539bbaf9e5a40e4a91d (image=quay.io/ceph/ceph:v18, name=hungry_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Jan 27 08:28:45 compute-0 podman[82611]: 2026-01-27 08:28:45.088064091 +0000 UTC m=+0.113186668 container start 9a51b37a2c6e055bfe06cc2679727955615310f59a4ab539bbaf9e5a40e4a91d (image=quay.io/ceph/ceph:v18, name=hungry_pike, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:28:45 compute-0 podman[82611]: 2026-01-27 08:28:45.091480134 +0000 UTC m=+0.116602721 container attach 9a51b37a2c6e055bfe06cc2679727955615310f59a4ab539bbaf9e5a40e4a91d (image=quay.io/ceph/ceph:v18, name=hungry_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:28:45 compute-0 podman[82611]: 2026-01-27 08:28:44.999672368 +0000 UTC m=+0.024794955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:45 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1785301263' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 27 08:28:45 compute-0 ceph-mon[74357]: osdmap e3: 0 total, 0 up, 0 in
Jan 27 08:28:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:45 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:45 compute-0 sudo[82650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:45 compute-0 sudo[82650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:45 compute-0 sudo[82650]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:45 compute-0 sudo[82675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:28:45 compute-0 sudo[82675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:45 compute-0 sudo[82675]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:45 compute-0 sudo[82700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:45 compute-0 sudo[82700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:45 compute-0 sudo[82700]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:45 compute-0 sudo[82725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Jan 27 08:28:45 compute-0 sudo[82725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:46 compute-0 sudo[82725]: pam_unix(sudo:session): session closed for user root
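
cephadm validates every managed node before deploying to it; the `check-host --expect-hostname` run above is that probe (container runtime, systemd, time sync, plus a hostname match). When a host refuses to join, the same check can be run by hand:

    $ sudo cephadm check-host --expect-hostname compute-0
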
Jan 27 08:28:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 27 08:28:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 27 08:28:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 27 08:28:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 27 08:28:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:46 compute-0 ceph-mgr[74650]: [cephadm INFO root] Added host compute-0
Jan 27 08:28:46 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 27 08:28:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:28:46 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:28:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:28:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:28:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:46 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 09934d70-eca0-410e-a0a1-f124981b67d9 does not exist
Jan 27 08:28:46 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 11132e95-2bb8-481c-a24d-51d3f9413ba1 does not exist
Jan 27 08:28:46 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5ae37d4f-dc1f-45fa-8329-899cb3456a71 does not exist
Jan 27 08:28:46 compute-0 sudo[82771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:28:46 compute-0 sudo[82771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:46 compute-0 sudo[82771]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:46 compute-0 sudo[82796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:28:46 compute-0 sudo[82796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:28:46 compute-0 sudo[82796]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:46 compute-0 ceph-mon[74357]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:46 compute-0 ceph-mon[74357]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:28:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:28:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:28:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:47 compute-0 ceph-mon[74357]: Added host compute-0
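
Hosts enter the orchestrator inventory one at a time: the mgr copies the cephadm binary over SSH, probes the host, then persists it through the mon config-key store (the repeated mgr/cephadm/inventory writes above). The equivalent manual step, using the addresses reported later in this log, would be:

    $ sudo ceph orch host add compute-1 192.168.122.101
    $ sudo ceph orch host ls      # confirm the inventory
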
Jan 27 08:28:47 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 27 08:28:47 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 27 08:28:48 compute-0 ceph-mon[74357]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:48 compute-0 ceph-mon[74357]: Deploying cephadm binary to compute-1
Jan 27 08:28:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:50 compute-0 ceph-mon[74357]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 27 08:28:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:50 compute-0 ceph-mgr[74650]: [cephadm INFO root] Added host compute-1
Jan 27 08:28:50 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 27 08:28:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:28:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:28:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:51 compute-0 ceph-mon[74357]: Added host compute-1
Jan 27 08:28:51 compute-0 ceph-mon[74357]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:52 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 27 08:28:52 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 27 08:28:52 compute-0 ceph-mon[74357]: Deploying cephadm binary to compute-2
Jan 27 08:28:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:28:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:54 compute-0 ceph-mon[74357]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:56 compute-0 ceph-mon[74357]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 27 08:28:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: [cephadm INFO root] Added host compute-2
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 27 08:28:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 27 08:28:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 27 08:28:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 27 08:28:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 27 08:28:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Jan 27 08:28:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:56 compute-0 hungry_pike[82626]: Added host 'compute-0' with addr '192.168.122.100'
Jan 27 08:28:56 compute-0 hungry_pike[82626]: Added host 'compute-1' with addr '192.168.122.101'
Jan 27 08:28:56 compute-0 hungry_pike[82626]: Added host 'compute-2' with addr '192.168.122.102'
Jan 27 08:28:56 compute-0 hungry_pike[82626]: Scheduled mon update...
Jan 27 08:28:56 compute-0 hungry_pike[82626]: Scheduled mgr update...
Jan 27 08:28:56 compute-0 hungry_pike[82626]: Scheduled osd.default_drive_group update...
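
Note that hungry_pike (the `orch apply` container) exits as soon as the three specs are saved and scheduled; nothing has actually been deployed yet. Progress has to be polled afterwards, for example:

    $ sudo ceph orch ls             # per-service spec status (running/size)
    $ sudo ceph orch ps --refresh   # daemons actually placed so far
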
Jan 27 08:28:56 compute-0 systemd[1]: libpod-9a51b37a2c6e055bfe06cc2679727955615310f59a4ab539bbaf9e5a40e4a91d.scope: Deactivated successfully.
Jan 27 08:28:56 compute-0 podman[82611]: 2026-01-27 08:28:56.438344873 +0000 UTC m=+11.463467450 container died 9a51b37a2c6e055bfe06cc2679727955615310f59a4ab539bbaf9e5a40e4a91d (image=quay.io/ceph/ceph:v18, name=hungry_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 27 08:28:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e33a66a679cea4a97f86ea71b30c329ec70db3100bd7c18857c16417a6a17c1-merged.mount: Deactivated successfully.
Jan 27 08:28:56 compute-0 podman[82611]: 2026-01-27 08:28:56.683610952 +0000 UTC m=+11.708733529 container remove 9a51b37a2c6e055bfe06cc2679727955615310f59a4ab539bbaf9e5a40e4a91d (image=quay.io/ceph/ceph:v18, name=hungry_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:28:56 compute-0 systemd[1]: libpod-conmon-9a51b37a2c6e055bfe06cc2679727955615310f59a4ab539bbaf9e5a40e4a91d.scope: Deactivated successfully.
Jan 27 08:28:56 compute-0 sudo[82608]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:56 compute-0 sudo[82858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vumfsizdmsalstfvppmnmjwhuoiniilt ; /usr/bin/python3'
Jan 27 08:28:56 compute-0 sudo[82858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:28:57 compute-0 python3[82860]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:28:57 compute-0 podman[82862]: 2026-01-27 08:28:57.183299969 +0000 UTC m=+0.060806695 container create 36b4469b9d13006c9c020a3cd9b200d3b61905becb63e1e491b9dac529dc8c3d (image=quay.io/ceph/ceph:v18, name=practical_gauss, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:28:57 compute-0 podman[82862]: 2026-01-27 08:28:57.150142547 +0000 UTC m=+0.027649303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:28:57 compute-0 systemd[1]: Started libpod-conmon-36b4469b9d13006c9c020a3cd9b200d3b61905becb63e1e491b9dac529dc8c3d.scope.
Jan 27 08:28:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:28:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee674fd49ce2edd32c1d45e4419f881768b70ff9ce14e6a406c2be738641b2c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee674fd49ce2edd32c1d45e4419f881768b70ff9ce14e6a406c2be738641b2c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee674fd49ce2edd32c1d45e4419f881768b70ff9ce14e6a406c2be738641b2c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:57 compute-0 podman[82862]: 2026-01-27 08:28:57.339107234 +0000 UTC m=+0.216613980 container init 36b4469b9d13006c9c020a3cd9b200d3b61905becb63e1e491b9dac529dc8c3d (image=quay.io/ceph/ceph:v18, name=practical_gauss, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:57 compute-0 podman[82862]: 2026-01-27 08:28:57.345768536 +0000 UTC m=+0.223275262 container start 36b4469b9d13006c9c020a3cd9b200d3b61905becb63e1e491b9dac529dc8c3d (image=quay.io/ceph/ceph:v18, name=practical_gauss, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:28:57 compute-0 podman[82862]: 2026-01-27 08:28:57.433645195 +0000 UTC m=+0.311151951 container attach 36b4469b9d13006c9c020a3cd9b200d3b61905becb63e1e491b9dac529dc8c3d (image=quay.io/ceph/ceph:v18, name=practical_gauss, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:57 compute-0 ceph-mon[74357]: Added host compute-2
Jan 27 08:28:57 compute-0 ceph-mon[74357]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 27 08:28:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:57 compute-0 ceph-mon[74357]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 27 08:28:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:57 compute-0 ceph-mon[74357]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 27 08:28:57 compute-0 ceph-mon[74357]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 27 08:28:57 compute-0 ceph-mon[74357]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 27 08:28:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:28:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 27 08:28:57 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1416906649' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 27 08:28:57 compute-0 practical_gauss[82878]: 
Jan 27 08:28:57 compute-0 practical_gauss[82878]: {"fsid":"281e9bde-2795-59f4-98ac-90cf5b49a2de","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":91,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-27T08:27:23.062023+0000","services":{}},"progress_events":{}}
Jan 27 08:28:57 compute-0 systemd[1]: libpod-36b4469b9d13006c9c020a3cd9b200d3b61905becb63e1e491b9dac529dc8c3d.scope: Deactivated successfully.
Jan 27 08:28:57 compute-0 podman[82862]: 2026-01-27 08:28:57.992595472 +0000 UTC m=+0.870102238 container died 36b4469b9d13006c9c020a3cd9b200d3b61905becb63e1e491b9dac529dc8c3d (image=quay.io/ceph/ceph:v18, name=practical_gauss, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-aee674fd49ce2edd32c1d45e4419f881768b70ff9ce14e6a406c2be738641b2c-merged.mount: Deactivated successfully.
Jan 27 08:28:58 compute-0 podman[82862]: 2026-01-27 08:28:58.373021486 +0000 UTC m=+1.250528242 container remove 36b4469b9d13006c9c020a3cd9b200d3b61905becb63e1e491b9dac529dc8c3d (image=quay.io/ceph/ceph:v18, name=practical_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 27 08:28:58 compute-0 sudo[82858]: pam_unix(sudo:session): session closed for user root
Jan 27 08:28:58 compute-0 systemd[1]: libpod-conmon-36b4469b9d13006c9c020a3cd9b200d3b61905becb63e1e491b9dac529dc8c3d.scope: Deactivated successfully.
Jan 27 08:28:58 compute-0 ceph-mon[74357]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:28:58 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1416906649' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 27 08:28:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:00 compute-0 ceph-mon[74357]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:02 compute-0 ceph-mon[74357]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:04 compute-0 ceph-mon[74357]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:06 compute-0 ceph-mon[74357]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:08 compute-0 ceph-mon[74357]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:29:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:29:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:29:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:29:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 27 08:29:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 27 08:29:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:29:08 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:29:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:29:08 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 27 08:29:08 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 27 08:29:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 27 08:29:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:29:09 compute-0 ceph-mon[74357]: Updating compute-1:/etc/ceph/ceph.conf
Jan 27 08:29:09 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:29:09 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:29:10 compute-0 ceph-mon[74357]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:10 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 27 08:29:10 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 27 08:29:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:11 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring
Jan 27 08:29:11 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring
Jan 27 08:29:11 compute-0 ceph-mon[74357]: Updating compute-1:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:29:11 compute-0 ceph-mon[74357]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 27 08:29:11 compute-0 ceph-mon[74357]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
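
The four `Updating compute-1:...` messages show cephadm distributing both the minimal ceph.conf (from `config generate-minimal-conf`) and the admin keyring to the newly added host, once under /etc/ceph and once under /var/lib/ceph/<fsid>/config. Whether /etc/ceph is managed this way is governed by a cephadm module option; a hedged way to inspect it, with the option name assumed from upstream cephadm:

    $ sudo ceph config get mgr mgr/cephadm/manage_etc_ceph_ceph_conf
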
Jan 27 08:29:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:29:12 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:29:12 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:29:12 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:12 compute-0 ceph-mgr[74650]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 27 08:29:12 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 27 08:29:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:12 compute-0 ceph-mgr[74650]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 27 08:29:12 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 27 08:29:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:12 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev 23a9d4c6-8e33-4189-89bd-1fc0e07cffc8 (Updating crash deployment (+1 -> 2))
Jan 27 08:29:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 27 08:29:12 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:29:12.737+0000 7fe0eee58640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: service_name: mon
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: placement:
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]:   hosts:
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]:   - compute-0
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]:   - compute-1
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]:   - compute-2
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:29:12.738+0000 7fe0eee58640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: service_name: mgr
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: placement:
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]:   hosts:
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]:   - compute-0
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]:   - compute-1
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]:   - compute-2
Jan 27 08:29:12 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 27 08:29:12 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 27 08:29:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:29:12 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:12 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 27 08:29:12 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 27 08:29:13 compute-0 ceph-mon[74357]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 27 08:29:13 compute-0 ceph-mon[74357]: Updating compute-1:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring
Jan 27 08:29:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:13 compute-0 ceph-mon[74357]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 27 08:29:13 compute-0 ceph-mon[74357]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:13 compute-0 ceph-mon[74357]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 27 08:29:13 compute-0 ceph-mon[74357]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 27 08:29:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 27 08:29:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:13 compute-0 ceph-mon[74357]: Deploying daemon crash.compute-1 on compute-1
Jan 27 08:29:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:14 compute-0 ceph-mon[74357]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 27 08:29:14 compute-0 ceph-mon[74357]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:29:14
Jan 27 08:29:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:29:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:29:14 compute-0 ceph-mgr[74650]: [balancer INFO root] No pools available
Jan 27 08:29:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:29:15 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:29:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:29:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:29:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:29:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:29:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:29:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:29:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:29:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:29:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:29:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 27 08:29:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:15 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev 23a9d4c6-8e33-4189-89bd-1fc0e07cffc8 (Updating crash deployment (+1 -> 2))
Jan 27 08:29:15 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event 23a9d4c6-8e33-4189-89bd-1fc0e07cffc8 (Updating crash deployment (+1 -> 2)) in 2 seconds
Jan 27 08:29:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 27 08:29:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:29:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:29:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:29:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:29:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:29:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:29:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:29:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:29:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:15 compute-0 sudo[82915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:15 compute-0 sudo[82915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:15 compute-0 sudo[82915]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:15 compute-0 sudo[82940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:29:15 compute-0 sudo[82940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:15 compute-0 sudo[82940]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:15 compute-0 sudo[82965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:15 compute-0 sudo[82965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:15 compute-0 sudo[82965]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:15 compute-0 sudo[82990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:29:15 compute-0 sudo[82990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:15 compute-0 podman[83056]: 2026-01-27 08:29:15.62055247 +0000 UTC m=+0.053005782 container create 9893269fd6772e4e59d27d4feecf55ae0e3c611aae3a3cf703fe9c4fcd7c8b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goldstine, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:29:15 compute-0 systemd[1]: Started libpod-conmon-9893269fd6772e4e59d27d4feecf55ae0e3c611aae3a3cf703fe9c4fcd7c8b18.scope.
Jan 27 08:29:15 compute-0 podman[83056]: 2026-01-27 08:29:15.589369492 +0000 UTC m=+0.021822814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:29:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:15 compute-0 podman[83056]: 2026-01-27 08:29:15.723319414 +0000 UTC m=+0.155772736 container init 9893269fd6772e4e59d27d4feecf55ae0e3c611aae3a3cf703fe9c4fcd7c8b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goldstine, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:29:15 compute-0 podman[83056]: 2026-01-27 08:29:15.729467041 +0000 UTC m=+0.161920373 container start 9893269fd6772e4e59d27d4feecf55ae0e3c611aae3a3cf703fe9c4fcd7c8b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:29:15 compute-0 heuristic_goldstine[83072]: 167 167
Jan 27 08:29:15 compute-0 systemd[1]: libpod-9893269fd6772e4e59d27d4feecf55ae0e3c611aae3a3cf703fe9c4fcd7c8b18.scope: Deactivated successfully.
Jan 27 08:29:15 compute-0 podman[83056]: 2026-01-27 08:29:15.741529999 +0000 UTC m=+0.173983311 container attach 9893269fd6772e4e59d27d4feecf55ae0e3c611aae3a3cf703fe9c4fcd7c8b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goldstine, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 27 08:29:15 compute-0 podman[83056]: 2026-01-27 08:29:15.742207798 +0000 UTC m=+0.174661090 container died 9893269fd6772e4e59d27d4feecf55ae0e3c611aae3a3cf703fe9c4fcd7c8b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:29:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bf436e9e19033b396eaa2adbfc34a53edbb65a3283f25c9ac1c2f872f4d6334-merged.mount: Deactivated successfully.
Jan 27 08:29:15 compute-0 podman[83056]: 2026-01-27 08:29:15.842906505 +0000 UTC m=+0.275359827 container remove 9893269fd6772e4e59d27d4feecf55ae0e3c611aae3a3cf703fe9c4fcd7c8b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:29:15 compute-0 systemd[1]: libpod-conmon-9893269fd6772e4e59d27d4feecf55ae0e3c611aae3a3cf703fe9c4fcd7c8b18.scope: Deactivated successfully.
Jan 27 08:29:15 compute-0 podman[83098]: 2026-01-27 08:29:15.979829348 +0000 UTC m=+0.042624309 container create d6241401056dd7bb5d36bdb4e2c853489375c6af0d978366d35a623fe65bae5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:29:16 compute-0 systemd[1]: Started libpod-conmon-d6241401056dd7bb5d36bdb4e2c853489375c6af0d978366d35a623fe65bae5b.scope.
Jan 27 08:29:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b866f2dedcea3b3fed9e9cc8ee21377689fc3817964ad37cad757dfcc0e5c599/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b866f2dedcea3b3fed9e9cc8ee21377689fc3817964ad37cad757dfcc0e5c599/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b866f2dedcea3b3fed9e9cc8ee21377689fc3817964ad37cad757dfcc0e5c599/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b866f2dedcea3b3fed9e9cc8ee21377689fc3817964ad37cad757dfcc0e5c599/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b866f2dedcea3b3fed9e9cc8ee21377689fc3817964ad37cad757dfcc0e5c599/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:29:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:29:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:29:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:16 compute-0 podman[83098]: 2026-01-27 08:29:15.958782247 +0000 UTC m=+0.021577228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:29:16 compute-0 podman[83098]: 2026-01-27 08:29:16.078230664 +0000 UTC m=+0.141025655 container init d6241401056dd7bb5d36bdb4e2c853489375c6af0d978366d35a623fe65bae5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:29:16 compute-0 podman[83098]: 2026-01-27 08:29:16.08430189 +0000 UTC m=+0.147096851 container start d6241401056dd7bb5d36bdb4e2c853489375c6af0d978366d35a623fe65bae5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:29:16 compute-0 podman[83098]: 2026-01-27 08:29:16.090284482 +0000 UTC m=+0.153079443 container attach d6241401056dd7bb5d36bdb4e2c853489375c6af0d978366d35a623fe65bae5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:29:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:16 compute-0 youthful_galois[83115]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:29:16 compute-0 youthful_galois[83115]: --> relative data size: 1.0
Jan 27 08:29:16 compute-0 youthful_galois[83115]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:29:16 compute-0 youthful_galois[83115]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c06a7c81-ab3c-42b8-812f-79473670be30
Jan 27 08:29:17 compute-0 ceph-mon[74357]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "c06a7c81-ab3c-42b8-812f-79473670be30"} v 0) v1
Jan 27 08:29:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2374612996' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c06a7c81-ab3c-42b8-812f-79473670be30"}]: dispatch
Jan 27 08:29:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 27 08:29:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:29:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "136e491a-dc93-436b-bcbd-5d7dc65ecb4a"} v 0) v1
Jan 27 08:29:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2728142001' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "136e491a-dc93-436b-bcbd-5d7dc65ecb4a"}]: dispatch
Jan 27 08:29:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2374612996' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c06a7c81-ab3c-42b8-812f-79473670be30"}]': finished
Jan 27 08:29:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 27 08:29:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 27 08:29:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:29:17 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 27 08:29:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 27 08:29:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:17 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:29:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2728142001' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "136e491a-dc93-436b-bcbd-5d7dc65ecb4a"}]': finished
Jan 27 08:29:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 27 08:29:17 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 27 08:29:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 27 08:29:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 27 08:29:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:17 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:29:17 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:29:17 compute-0 youthful_galois[83115]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:29:17 compute-0 youthful_galois[83115]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 27 08:29:17 compute-0 lvm[83163]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:29:17 compute-0 lvm[83163]: VG ceph_vg0 finished
Jan 27 08:29:17 compute-0 youthful_galois[83115]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 27 08:29:17 compute-0 youthful_galois[83115]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 27 08:29:17 compute-0 youthful_galois[83115]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 27 08:29:17 compute-0 youthful_galois[83115]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 27 08:29:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 27 08:29:18 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/440986014' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 27 08:29:18 compute-0 youthful_galois[83115]:  stderr: got monmap epoch 1
Jan 27 08:29:18 compute-0 youthful_galois[83115]: --> Creating keyring file for osd.0
Jan 27 08:29:18 compute-0 youthful_galois[83115]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 27 08:29:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 27 08:29:18 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2061201988' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 27 08:29:18 compute-0 youthful_galois[83115]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 27 08:29:18 compute-0 youthful_galois[83115]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid c06a7c81-ab3c-42b8-812f-79473670be30 --setuser ceph --setgroup ceph
Jan 27 08:29:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2374612996' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c06a7c81-ab3c-42b8-812f-79473670be30"}]: dispatch
Jan 27 08:29:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2728142001' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "136e491a-dc93-436b-bcbd-5d7dc65ecb4a"}]: dispatch
Jan 27 08:29:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2374612996' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c06a7c81-ab3c-42b8-812f-79473670be30"}]': finished
Jan 27 08:29:18 compute-0 ceph-mon[74357]: osdmap e4: 1 total, 0 up, 1 in
Jan 27 08:29:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2728142001' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "136e491a-dc93-436b-bcbd-5d7dc65ecb4a"}]': finished
Jan 27 08:29:18 compute-0 ceph-mon[74357]: osdmap e5: 2 total, 0 up, 2 in
Jan 27 08:29:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/440986014' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 27 08:29:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2061201988' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 27 08:29:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:19 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 27 08:29:19 compute-0 ceph-mon[74357]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:20 compute-0 ceph-mgr[74650]: [progress INFO root] Writing back 2 completed events
Jan 27 08:29:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 27 08:29:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:20 compute-0 ceph-mon[74357]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 27 08:29:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:21 compute-0 ceph-mon[74357]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:22 compute-0 youthful_galois[83115]:  stderr: 2026-01-27T08:29:18.156+0000 7feffed94740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 27 08:29:22 compute-0 youthful_galois[83115]:  stderr: 2026-01-27T08:29:18.156+0000 7feffed94740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 27 08:29:22 compute-0 youthful_galois[83115]:  stderr: 2026-01-27T08:29:18.156+0000 7feffed94740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 27 08:29:22 compute-0 youthful_galois[83115]:  stderr: 2026-01-27T08:29:18.156+0000 7feffed94740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 27 08:29:22 compute-0 youthful_galois[83115]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 27 08:29:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 27 08:29:22 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 27 08:29:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:29:22 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:22 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Jan 27 08:29:22 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Jan 27 08:29:22 compute-0 youthful_galois[83115]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 27 08:29:22 compute-0 youthful_galois[83115]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 27 08:29:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 27 08:29:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:22 compute-0 youthful_galois[83115]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 27 08:29:22 compute-0 youthful_galois[83115]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 27 08:29:22 compute-0 youthful_galois[83115]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 27 08:29:22 compute-0 youthful_galois[83115]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 27 08:29:22 compute-0 youthful_galois[83115]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 27 08:29:22 compute-0 youthful_galois[83115]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 27 08:29:22 compute-0 systemd[1]: libpod-d6241401056dd7bb5d36bdb4e2c853489375c6af0d978366d35a623fe65bae5b.scope: Deactivated successfully.
Jan 27 08:29:22 compute-0 systemd[1]: libpod-d6241401056dd7bb5d36bdb4e2c853489375c6af0d978366d35a623fe65bae5b.scope: Consumed 2.364s CPU time.
Jan 27 08:29:22 compute-0 podman[84087]: 2026-01-27 08:29:22.723792996 +0000 UTC m=+0.024615441 container died d6241401056dd7bb5d36bdb4e2c853489375c6af0d978366d35a623fe65bae5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:29:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b866f2dedcea3b3fed9e9cc8ee21377689fc3817964ad37cad757dfcc0e5c599-merged.mount: Deactivated successfully.
Jan 27 08:29:22 compute-0 podman[84087]: 2026-01-27 08:29:22.887688212 +0000 UTC m=+0.188510677 container remove d6241401056dd7bb5d36bdb4e2c853489375c6af0d978366d35a623fe65bae5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:29:22 compute-0 systemd[1]: libpod-conmon-d6241401056dd7bb5d36bdb4e2c853489375c6af0d978366d35a623fe65bae5b.scope: Deactivated successfully.
Jan 27 08:29:22 compute-0 sudo[82990]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:22 compute-0 sudo[84102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:22 compute-0 sudo[84102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:22 compute-0 sudo[84102]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:23 compute-0 sudo[84127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:29:23 compute-0 sudo[84127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:23 compute-0 sudo[84127]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:23 compute-0 sudo[84152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:23 compute-0 sudo[84152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:23 compute-0 sudo[84152]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:23 compute-0 sudo[84177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:29:23 compute-0 sudo[84177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:23 compute-0 podman[84241]: 2026-01-27 08:29:23.452439037 +0000 UTC m=+0.076649146 container create 540b7daa11a183cd075cbc5eadabc81c43fe29e0d1fd7c1c547313106f686ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lamarr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 08:29:23 compute-0 podman[84241]: 2026-01-27 08:29:23.397430341 +0000 UTC m=+0.021640450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:29:23 compute-0 systemd[1]: Started libpod-conmon-540b7daa11a183cd075cbc5eadabc81c43fe29e0d1fd7c1c547313106f686ef8.scope.
Jan 27 08:29:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:23 compute-0 podman[84241]: 2026-01-27 08:29:23.577737014 +0000 UTC m=+0.201947133 container init 540b7daa11a183cd075cbc5eadabc81c43fe29e0d1fd7c1c547313106f686ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 08:29:23 compute-0 podman[84241]: 2026-01-27 08:29:23.590403478 +0000 UTC m=+0.214613587 container start 540b7daa11a183cd075cbc5eadabc81c43fe29e0d1fd7c1c547313106f686ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:29:23 compute-0 gifted_lamarr[84257]: 167 167
Jan 27 08:29:23 compute-0 systemd[1]: libpod-540b7daa11a183cd075cbc5eadabc81c43fe29e0d1fd7c1c547313106f686ef8.scope: Deactivated successfully.
Jan 27 08:29:23 compute-0 podman[84241]: 2026-01-27 08:29:23.622990754 +0000 UTC m=+0.247200873 container attach 540b7daa11a183cd075cbc5eadabc81c43fe29e0d1fd7c1c547313106f686ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:29:23 compute-0 podman[84241]: 2026-01-27 08:29:23.623323123 +0000 UTC m=+0.247533232 container died 540b7daa11a183cd075cbc5eadabc81c43fe29e0d1fd7c1c547313106f686ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:29:23 compute-0 ceph-mon[74357]: Deploying daemon osd.1 on compute-1
Jan 27 08:29:23 compute-0 ceph-mon[74357]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-33e79270a26e0b5e52692f92ec3f92fe9c5fc87a9ab5f9ac8aa20447738f4d17-merged.mount: Deactivated successfully.
Jan 27 08:29:23 compute-0 podman[84241]: 2026-01-27 08:29:23.945306428 +0000 UTC m=+0.569516527 container remove 540b7daa11a183cd075cbc5eadabc81c43fe29e0d1fd7c1c547313106f686ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:29:23 compute-0 systemd[1]: libpod-conmon-540b7daa11a183cd075cbc5eadabc81c43fe29e0d1fd7c1c547313106f686ef8.scope: Deactivated successfully.
Jan 27 08:29:24 compute-0 podman[84280]: 2026-01-27 08:29:24.139093328 +0000 UTC m=+0.087864671 container create b1a2776a9f2a1dcd836b9379d910a2c7994bf9706c30e3bc450b7c5cbe641544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shannon, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 27 08:29:24 compute-0 podman[84280]: 2026-01-27 08:29:24.075099007 +0000 UTC m=+0.023870340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:29:24 compute-0 systemd[1]: Started libpod-conmon-b1a2776a9f2a1dcd836b9379d910a2c7994bf9706c30e3bc450b7c5cbe641544.scope.
Jan 27 08:29:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd4ac224b340947d74ede23b4f04dbae1edba3b6d46667482cd37f0d416566b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd4ac224b340947d74ede23b4f04dbae1edba3b6d46667482cd37f0d416566b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd4ac224b340947d74ede23b4f04dbae1edba3b6d46667482cd37f0d416566b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd4ac224b340947d74ede23b4f04dbae1edba3b6d46667482cd37f0d416566b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:24 compute-0 podman[84280]: 2026-01-27 08:29:24.35585993 +0000 UTC m=+0.304631263 container init b1a2776a9f2a1dcd836b9379d910a2c7994bf9706c30e3bc450b7c5cbe641544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:29:24 compute-0 podman[84280]: 2026-01-27 08:29:24.36501041 +0000 UTC m=+0.313781723 container start b1a2776a9f2a1dcd836b9379d910a2c7994bf9706c30e3bc450b7c5cbe641544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shannon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 27 08:29:24 compute-0 podman[84280]: 2026-01-27 08:29:24.420255101 +0000 UTC m=+0.369026434 container attach b1a2776a9f2a1dcd836b9379d910a2c7994bf9706c30e3bc450b7c5cbe641544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shannon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:29:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:24 compute-0 ceph-mon[74357]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:25 compute-0 quirky_shannon[84296]: {
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:     "0": [
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:         {
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             "devices": [
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "/dev/loop3"
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             ],
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             "lv_name": "ceph_lv0",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             "lv_size": "7511998464",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             "name": "ceph_lv0",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             "tags": {
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "ceph.cluster_name": "ceph",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "ceph.crush_device_class": "",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "ceph.encrypted": "0",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "ceph.osd_id": "0",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "ceph.type": "block",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:                 "ceph.vdo": "0"
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             },
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             "type": "block",
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:             "vg_name": "ceph_vg0"
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:         }
Jan 27 08:29:25 compute-0 quirky_shannon[84296]:     ]
Jan 27 08:29:25 compute-0 quirky_shannon[84296]: }
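The JSON printed by the quirky_shannon container is consistent with `ceph-volume lvm list --format json` run through cephadm: the top-level key is the OSD id ("0"), and each record maps it to the logical volume /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3. A minimal Python sketch for pulling that mapping out of such a report, reading the JSON from stdin (the field names are taken from the log above; the exact subcommand is an assumption, since the container's argv is not logged):

    import json, sys

    # Parse a ceph-volume "lvm list --format json"-shaped report from stdin.
    # Top-level keys are OSD ids; each value is a list of LV records.
    report = json.load(sys.stdin)
    for osd_id, lvs in report.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")

Against the payload above this prints one line: osd.0 on /dev/ceph_vg0/ceph_lv0, device /dev/loop3, osd_fsid c06a7c81-ab3c-42b8-812f-79473670be30.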
Jan 27 08:29:25 compute-0 systemd[1]: libpod-b1a2776a9f2a1dcd836b9379d910a2c7994bf9706c30e3bc450b7c5cbe641544.scope: Deactivated successfully.
Jan 27 08:29:25 compute-0 podman[84280]: 2026-01-27 08:29:25.167717605 +0000 UTC m=+1.116488928 container died b1a2776a9f2a1dcd836b9379d910a2c7994bf9706c30e3bc450b7c5cbe641544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 27 08:29:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfd4ac224b340947d74ede23b4f04dbae1edba3b6d46667482cd37f0d416566b-merged.mount: Deactivated successfully.
Jan 27 08:29:25 compute-0 podman[84280]: 2026-01-27 08:29:25.223633115 +0000 UTC m=+1.172404428 container remove b1a2776a9f2a1dcd836b9379d910a2c7994bf9706c30e3bc450b7c5cbe641544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:29:25 compute-0 systemd[1]: libpod-conmon-b1a2776a9f2a1dcd836b9379d910a2c7994bf9706c30e3bc450b7c5cbe641544.scope: Deactivated successfully.
Jan 27 08:29:25 compute-0 sudo[84177]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 27 08:29:25 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 27 08:29:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:29:25 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:25 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 27 08:29:25 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 27 08:29:25 compute-0 sudo[84316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:25 compute-0 sudo[84316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:25 compute-0 sudo[84316]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:25 compute-0 sudo[84341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:29:25 compute-0 sudo[84341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:25 compute-0 sudo[84341]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:25 compute-0 sudo[84366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:25 compute-0 sudo[84366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:25 compute-0 sudo[84366]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:25 compute-0 sudo[84391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:29:25 compute-0 sudo[84391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:25 compute-0 podman[84456]: 2026-01-27 08:29:25.778591374 +0000 UTC m=+0.036822152 container create d8fa1e7cfcd8d7802c4f84c735cebcca9ce89ee3d09928d1ca039b29a76d2dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 27 08:29:25 compute-0 systemd[1]: Started libpod-conmon-d8fa1e7cfcd8d7802c4f84c735cebcca9ce89ee3d09928d1ca039b29a76d2dfc.scope.
Jan 27 08:29:25 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 27 08:29:25 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:25 compute-0 ceph-mon[74357]: Deploying daemon osd.0 on compute-0
Jan 27 08:29:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:25 compute-0 podman[84456]: 2026-01-27 08:29:25.846236433 +0000 UTC m=+0.104467231 container init d8fa1e7cfcd8d7802c4f84c735cebcca9ce89ee3d09928d1ca039b29a76d2dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kalam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 08:29:25 compute-0 podman[84456]: 2026-01-27 08:29:25.85200818 +0000 UTC m=+0.110238958 container start d8fa1e7cfcd8d7802c4f84c735cebcca9ce89ee3d09928d1ca039b29a76d2dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:29:25 compute-0 awesome_kalam[84474]: 167 167
Jan 27 08:29:25 compute-0 systemd[1]: libpod-d8fa1e7cfcd8d7802c4f84c735cebcca9ce89ee3d09928d1ca039b29a76d2dfc.scope: Deactivated successfully.
Jan 27 08:29:25 compute-0 conmon[84474]: conmon d8fa1e7cfcd8d7802c4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8fa1e7cfcd8d7802c4f84c735cebcca9ce89ee3d09928d1ca039b29a76d2dfc.scope/container/memory.events
Jan 27 08:29:25 compute-0 podman[84456]: 2026-01-27 08:29:25.856944025 +0000 UTC m=+0.115174803 container attach d8fa1e7cfcd8d7802c4f84c735cebcca9ce89ee3d09928d1ca039b29a76d2dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kalam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 27 08:29:25 compute-0 podman[84456]: 2026-01-27 08:29:25.857620213 +0000 UTC m=+0.115850991 container died d8fa1e7cfcd8d7802c4f84c735cebcca9ce89ee3d09928d1ca039b29a76d2dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kalam, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:29:25 compute-0 podman[84456]: 2026-01-27 08:29:25.764187123 +0000 UTC m=+0.022417921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:29:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f578ef04274307074135396919e64a0f2b1e5754867b1b14ce29736858bc9225-merged.mount: Deactivated successfully.
Jan 27 08:29:25 compute-0 podman[84456]: 2026-01-27 08:29:25.891556625 +0000 UTC m=+0.149787403 container remove d8fa1e7cfcd8d7802c4f84c735cebcca9ce89ee3d09928d1ca039b29a76d2dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kalam, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:29:25 compute-0 systemd[1]: libpod-conmon-d8fa1e7cfcd8d7802c4f84c735cebcca9ce89ee3d09928d1ca039b29a76d2dfc.scope: Deactivated successfully.
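The short-lived awesome_kalam container printed only "167 167", which matches cephadm probing the ceph uid:gid inside the image before deploying osd.0 (the OSD later starts with "set uid:gid to 167:167"). A hypothetical reconstruction of that probe, assuming the conventional `stat -c '%u %g' /var/lib/ceph` approach; the IMAGE digest is the one pulled in the log:

    import subprocess

    # Hypothetical reconstruction: stat /var/lib/ceph inside the Ceph image
    # to learn which uid:gid the daemon files must be owned by on the host.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    uid, gid = out.split()  # expected "167 167", as logged above
    print(uid, gid)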
Jan 27 08:29:26 compute-0 podman[84505]: 2026-01-27 08:29:26.107322972 +0000 UTC m=+0.037315565 container create 7b64e088c72e8e7eb00ced18fc7a24cc5b0bf667e2f084563ae6c579b32effca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:29:26 compute-0 systemd[1]: Started libpod-conmon-7b64e088c72e8e7eb00ced18fc7a24cc5b0bf667e2f084563ae6c579b32effca.scope.
Jan 27 08:29:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ce57d2bbc557a93320f08db4ba7b52e65c0311e814672df2b47ac1b9094134/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ce57d2bbc557a93320f08db4ba7b52e65c0311e814672df2b47ac1b9094134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ce57d2bbc557a93320f08db4ba7b52e65c0311e814672df2b47ac1b9094134/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ce57d2bbc557a93320f08db4ba7b52e65c0311e814672df2b47ac1b9094134/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ce57d2bbc557a93320f08db4ba7b52e65c0311e814672df2b47ac1b9094134/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:26 compute-0 podman[84505]: 2026-01-27 08:29:26.092499329 +0000 UTC m=+0.022491942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:29:26 compute-0 podman[84505]: 2026-01-27 08:29:26.210371454 +0000 UTC m=+0.140364087 container init 7b64e088c72e8e7eb00ced18fc7a24cc5b0bf667e2f084563ae6c579b32effca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:29:26 compute-0 podman[84505]: 2026-01-27 08:29:26.218158616 +0000 UTC m=+0.148151209 container start 7b64e088c72e8e7eb00ced18fc7a24cc5b0bf667e2f084563ae6c579b32effca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate-test, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:29:26 compute-0 podman[84505]: 2026-01-27 08:29:26.221112786 +0000 UTC m=+0.151105429 container attach 7b64e088c72e8e7eb00ced18fc7a24cc5b0bf667e2f084563ae6c579b32effca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate-test, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 27 08:29:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:29:26 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:29:26 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:26 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate-test[84520]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 27 08:29:26 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate-test[84520]:                             [--no-systemd] [--no-tmpfs]
Jan 27 08:29:26 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate-test[84520]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 27 08:29:26 compute-0 systemd[1]: libpod-7b64e088c72e8e7eb00ced18fc7a24cc5b0bf667e2f084563ae6c579b32effca.scope: Deactivated successfully.
Jan 27 08:29:26 compute-0 podman[84505]: 2026-01-27 08:29:26.871600593 +0000 UTC m=+0.801593186 container died 7b64e088c72e8e7eb00ced18fc7a24cc5b0bf667e2f084563ae6c579b32effca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 08:29:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-16ce57d2bbc557a93320f08db4ba7b52e65c0311e814672df2b47ac1b9094134-merged.mount: Deactivated successfully.
Jan 27 08:29:26 compute-0 podman[84505]: 2026-01-27 08:29:26.954140197 +0000 UTC m=+0.884132800 container remove 7b64e088c72e8e7eb00ced18fc7a24cc5b0bf667e2f084563ae6c579b32effca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate-test, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 08:29:26 compute-0 systemd[1]: libpod-conmon-7b64e088c72e8e7eb00ced18fc7a24cc5b0bf667e2f084563ae6c579b32effca.scope: Deactivated successfully.
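The `...osd-0-activate-test` container exiting with a usage error is expected: the argument `--bad-option` is passed deliberately, and an "unrecognized arguments" reply proves the image's ceph-volume parsed the top-level `activate` subcommand (an image without it would reject `activate` itself). That is why the real unit started at 08:29:27 can use the plain activate path, which delegates to raw activation ("ceph-volume raw activate successful" at 08:29:28). A sketch of that capability probe under the same assumption, with the same IMAGE digest as above:

    import subprocess

    # Probe: does this image's ceph-volume know the top-level `activate`
    # subcommand? If yes, a bogus flag yields "unrecognized arguments";
    # an older image instead rejects `activate` as an invalid choice.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    probe = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "ceph-volume", IMAGE,
         "activate", "--bad-option"],
        capture_output=True, text=True,
    )
    if "unrecognized arguments: --bad-option" in probe.stderr:
        print("use `ceph-volume activate`")
    else:
        print("fall back to `ceph-volume lvm activate`")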
Jan 27 08:29:27 compute-0 systemd[1]: Reloading.
Jan 27 08:29:27 compute-0 systemd-rc-local-generator[84580]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:29:27 compute-0 systemd-sysv-generator[84585]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:29:27 compute-0 systemd[1]: Reloading.
Jan 27 08:29:27 compute-0 systemd-sysv-generator[84627]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:29:27 compute-0 systemd-rc-local-generator[84623]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:29:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:27 compute-0 ceph-mon[74357]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:27 compute-0 systemd[1]: Starting Ceph osd.0 for 281e9bde-2795-59f4-98ac-90cf5b49a2de...
Jan 27 08:29:27 compute-0 podman[84678]: 2026-01-27 08:29:27.925784446 +0000 UTC m=+0.058274355 container create 1100857410d9a872fe440a83b8779aa131f17d6937254e184af7093d63c20c8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 27 08:29:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdcfe0e01605585fe867fa96769d05b0d483daf483dae2ebe73310313ff0af3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdcfe0e01605585fe867fa96769d05b0d483daf483dae2ebe73310313ff0af3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdcfe0e01605585fe867fa96769d05b0d483daf483dae2ebe73310313ff0af3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdcfe0e01605585fe867fa96769d05b0d483daf483dae2ebe73310313ff0af3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdcfe0e01605585fe867fa96769d05b0d483daf483dae2ebe73310313ff0af3/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:27 compute-0 podman[84678]: 2026-01-27 08:29:27.989799936 +0000 UTC m=+0.122289855 container init 1100857410d9a872fe440a83b8779aa131f17d6937254e184af7093d63c20c8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:29:27 compute-0 podman[84678]: 2026-01-27 08:29:27.8994742 +0000 UTC m=+0.031964189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:29:27 compute-0 podman[84678]: 2026-01-27 08:29:27.99802452 +0000 UTC m=+0.130514419 container start 1100857410d9a872fe440a83b8779aa131f17d6937254e184af7093d63c20c8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:29:28 compute-0 podman[84678]: 2026-01-27 08:29:28.001536276 +0000 UTC m=+0.134026185 container attach 1100857410d9a872fe440a83b8779aa131f17d6937254e184af7093d63c20c8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 27 08:29:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Jan 27 08:29:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1541991168,v1:192.168.122.101:6801/1541991168]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 27 08:29:28 compute-0 sudo[84723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svgfawciyouztsodowhxenkhasrriqco ; /usr/bin/python3'
Jan 27 08:29:28 compute-0 sudo[84723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:29:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:29:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:29:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:28 compute-0 python3[84725]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:29:28 compute-0 podman[84727]: 2026-01-27 08:29:28.697958271 +0000 UTC m=+0.039621958 container create d444e2275d1355126d98463b3798ede2fe78542dddd0abb48ad22757a834c073 (image=quay.io/ceph/ceph:v18, name=recursing_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 08:29:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 27 08:29:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:29:28 compute-0 ceph-mon[74357]: from='osd.1 [v2:192.168.122.101:6800/1541991168,v1:192.168.122.101:6801/1541991168]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 27 08:29:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1541991168,v1:192.168.122.101:6801/1541991168]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 27 08:29:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 27 08:29:28 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 27 08:29:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 27 08:29:28 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 27 08:29:28 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:28 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:29:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Jan 27 08:29:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1541991168,v1:192.168.122.101:6801/1541991168]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 27 08:29:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-1,root=default}
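The initial_weight 0.0068 is just the device capacity expressed in TiB. Assuming compute-1's OSD logical volume has the same 7511998464-byte size as ceph_lv0 listed earlier for osd.0, the arithmetic works out exactly:

    # CRUSH initial weight = capacity in TiB (bytes / 2**40).
    # Assumes osd.1's LV matches the 7511998464-byte ceph_lv0 above.
    size_bytes = 7511998464
    print(round(size_bytes / 2**40, 4))  # -> 0.0068, matching initial_weight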
Jan 27 08:29:28 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:29:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:28 compute-0 systemd[1]: Started libpod-conmon-d444e2275d1355126d98463b3798ede2fe78542dddd0abb48ad22757a834c073.scope.
Jan 27 08:29:28 compute-0 podman[84727]: 2026-01-27 08:29:28.681403551 +0000 UTC m=+0.023067258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:29:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5741f677b4f17408fb2f8046ef0c36f84d86d2e071e38876ebcbc7e77a6bce36/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5741f677b4f17408fb2f8046ef0c36f84d86d2e071e38876ebcbc7e77a6bce36/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5741f677b4f17408fb2f8046ef0c36f84d86d2e071e38876ebcbc7e77a6bce36/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:28 compute-0 podman[84727]: 2026-01-27 08:29:28.79869232 +0000 UTC m=+0.140356037 container init d444e2275d1355126d98463b3798ede2fe78542dddd0abb48ad22757a834c073 (image=quay.io/ceph/ceph:v18, name=recursing_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:29:28 compute-0 podman[84727]: 2026-01-27 08:29:28.808014894 +0000 UTC m=+0.149678591 container start d444e2275d1355126d98463b3798ede2fe78542dddd0abb48ad22757a834c073 (image=quay.io/ceph/ceph:v18, name=recursing_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:29:28 compute-0 podman[84727]: 2026-01-27 08:29:28.813336288 +0000 UTC m=+0.154999985 container attach d444e2275d1355126d98463b3798ede2fe78542dddd0abb48ad22757a834c073 (image=quay.io/ceph/ceph:v18, name=recursing_neumann, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:29:28 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate[84695]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 27 08:29:28 compute-0 bash[84678]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 27 08:29:28 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate[84695]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 27 08:29:28 compute-0 bash[84678]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 27 08:29:28 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate[84695]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 27 08:29:28 compute-0 bash[84678]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 27 08:29:28 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate[84695]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 27 08:29:28 compute-0 bash[84678]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 27 08:29:28 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate[84695]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 27 08:29:28 compute-0 bash[84678]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 27 08:29:28 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate[84695]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 27 08:29:28 compute-0 bash[84678]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 27 08:29:28 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate[84695]: --> ceph-volume raw activate successful for osd ID: 0
Jan 27 08:29:28 compute-0 bash[84678]: --> ceph-volume raw activate successful for osd ID: 0
Jan 27 08:29:28 compute-0 systemd[1]: libpod-1100857410d9a872fe440a83b8779aa131f17d6937254e184af7093d63c20c8a.scope: Deactivated successfully.
Jan 27 08:29:28 compute-0 systemd[1]: libpod-1100857410d9a872fe440a83b8779aa131f17d6937254e184af7093d63c20c8a.scope: Consumed 1.003s CPU time.
Jan 27 08:29:29 compute-0 podman[84678]: 2026-01-27 08:29:28.999952052 +0000 UTC m=+1.132441971 container died 1100857410d9a872fe440a83b8779aa131f17d6937254e184af7093d63c20c8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:29:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fdcfe0e01605585fe867fa96769d05b0d483daf483dae2ebe73310313ff0af3-merged.mount: Deactivated successfully.
Jan 27 08:29:29 compute-0 podman[84678]: 2026-01-27 08:29:29.0579589 +0000 UTC m=+1.190448799 container remove 1100857410d9a872fe440a83b8779aa131f17d6937254e184af7093d63c20c8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0-activate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:29:29 compute-0 podman[84931]: 2026-01-27 08:29:29.243158725 +0000 UTC m=+0.044010478 container create 46a0a8c9f96b7d89c556725a9c3d74bea40a39fd5c7fcd66006696d4a640d3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:29:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81735792b450004bc29f49fd59fc89ed135d9d99961a310ab3cbe71c28b2b18a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81735792b450004bc29f49fd59fc89ed135d9d99961a310ab3cbe71c28b2b18a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81735792b450004bc29f49fd59fc89ed135d9d99961a310ab3cbe71c28b2b18a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81735792b450004bc29f49fd59fc89ed135d9d99961a310ab3cbe71c28b2b18a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81735792b450004bc29f49fd59fc89ed135d9d99961a310ab3cbe71c28b2b18a/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:29 compute-0 podman[84931]: 2026-01-27 08:29:29.30808912 +0000 UTC m=+0.108940863 container init 46a0a8c9f96b7d89c556725a9c3d74bea40a39fd5c7fcd66006696d4a640d3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:29:29 compute-0 podman[84931]: 2026-01-27 08:29:29.312568453 +0000 UTC m=+0.113420176 container start 46a0a8c9f96b7d89c556725a9c3d74bea40a39fd5c7fcd66006696d4a640d3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:29:29 compute-0 bash[84931]: 46a0a8c9f96b7d89c556725a9c3d74bea40a39fd5c7fcd66006696d4a640d3c0
Jan 27 08:29:29 compute-0 podman[84931]: 2026-01-27 08:29:29.222105052 +0000 UTC m=+0.022956865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:29:29 compute-0 systemd[1]: Started Ceph osd.0 for 281e9bde-2795-59f4-98ac-90cf5b49a2de.
Jan 27 08:29:29 compute-0 ceph-osd[84951]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:29:29 compute-0 ceph-osd[84951]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 27 08:29:29 compute-0 ceph-osd[84951]: pidfile_write: ignore empty --pid-file
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558692fdf800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558692fdf800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558692fdf800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558692fdf800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558693e17800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558693e17800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558693e17800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558693e17800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558693e17800 /var/lib/ceph/osd/ceph-0/block) close
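Two recurring warnings in this startup sequence appear benign: the F_SET_FILE_RW_HINT failure just means the write-hint fcntl is not supported on this device-mapper target, and "st_blksize 512, using bdev_block_size 4096 anyway" records BlueStore overriding the LV's reported 512-byte size with its own 4 KiB block size, exactly as the message says. A small sketch to inspect the kernel's view of the same device (/dev/dm-0 per the chown at 08:29:28; run on the host, path assumed):

    # Kernel-reported sector sizes for the LV backing osd.0. A 512-byte
    # logical size is what surfaces as st_blksize above; BlueStore still
    # performs I/O in its 4 KiB bdev_block_size regardless.
    for name in ("logical_block_size", "physical_block_size"):
        with open(f"/sys/class/block/dm-0/queue/{name}") as f:
            print(name, "=", f.read().strip())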
Jan 27 08:29:29 compute-0 sudo[84391]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:29:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:29:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:29 compute-0 sudo[84964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:29 compute-0 sudo[84964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:29 compute-0 sudo[84964]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 27 08:29:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317749304' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 27 08:29:29 compute-0 recursing_neumann[84748]: 
Jan 27 08:29:29 compute-0 recursing_neumann[84748]: {"fsid":"281e9bde-2795-59f4-98ac-90cf5b49a2de","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":123,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":6,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1769502557,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-27T08:29:16.741143+0000","services":{}},"progress_events":{}}
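This status document is what the ansible task at 08:29:28 pipes through `jq .osdmap.num_up_osds`; against the payload above that yields 0, since both OSDs are "in" but neither has come up yet. The same extraction in Python, reading the JSON from stdin:

    import json, sys

    # Equivalent of the `| jq .osdmap.num_up_osds` step in the ansible task:
    # feed it the status JSON printed by the recursing_neumann container.
    status = json.load(sys.stdin)
    print(status["osdmap"]["num_up_osds"])  # -> 0 at this point in the log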
Jan 27 08:29:29 compute-0 systemd[1]: libpod-d444e2275d1355126d98463b3798ede2fe78542dddd0abb48ad22757a834c073.scope: Deactivated successfully.
Jan 27 08:29:29 compute-0 podman[84727]: 2026-01-27 08:29:29.475741779 +0000 UTC m=+0.817405506 container died d444e2275d1355126d98463b3798ede2fe78542dddd0abb48ad22757a834c073 (image=quay.io/ceph/ceph:v18, name=recursing_neumann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 27 08:29:29 compute-0 sudo[84989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:29:29 compute-0 sudo[84989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-5741f677b4f17408fb2f8046ef0c36f84d86d2e071e38876ebcbc7e77a6bce36-merged.mount: Deactivated successfully.
Jan 27 08:29:29 compute-0 sudo[84989]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:29 compute-0 podman[84727]: 2026-01-27 08:29:29.522062198 +0000 UTC m=+0.863725895 container remove d444e2275d1355126d98463b3798ede2fe78542dddd0abb48ad22757a834c073 (image=quay.io/ceph/ceph:v18, name=recursing_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:29:29 compute-0 systemd[1]: libpod-conmon-d444e2275d1355126d98463b3798ede2fe78542dddd0abb48ad22757a834c073.scope: Deactivated successfully.
Jan 27 08:29:29 compute-0 sudo[84723]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:29 compute-0 sudo[85026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:29 compute-0 sudo[85026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:29 compute-0 sudo[85026]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:29 compute-0 sudo[85051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:29:29 compute-0 sudo[85051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558692fdf800 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:29:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 27 08:29:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:29:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1541991168,v1:192.168.122.101:6801/1541991168]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 27 08:29:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 27 08:29:29 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 27 08:29:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 27 08:29:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 27 08:29:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:29 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:29:29 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:29:29 compute-0 ceph-mon[74357]: from='osd.1 [v2:192.168.122.101:6800/1541991168,v1:192.168.122.101:6801/1541991168]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 27 08:29:29 compute-0 ceph-mon[74357]: osdmap e6: 2 total, 0 up, 2 in
Jan 27 08:29:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:29 compute-0 ceph-mon[74357]: from='osd.1 [v2:192.168.122.101:6800/1541991168,v1:192.168.122.101:6801/1541991168]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 27 08:29:29 compute-0 ceph-mon[74357]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/317749304' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 27 08:29:29 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1541991168; not ready for session (expect reconnect)
Jan 27 08:29:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 27 08:29:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:29 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:29:29 compute-0 ceph-osd[84951]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 27 08:29:29 compute-0 ceph-osd[84951]: load: jerasure load: lrc 
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:29:29 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:29:29 compute-0 podman[85120]: 2026-01-27 08:29:29.914429537 +0000 UTC m=+0.041384506 container create b03bd10aeb7a99424254b6bdc998e2ff040ce50d01dcb0c3b71aa288aa315899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goodall, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:29:29 compute-0 systemd[1]: Started libpod-conmon-b03bd10aeb7a99424254b6bdc998e2ff040ce50d01dcb0c3b71aa288aa315899.scope.
Jan 27 08:29:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:29 compute-0 podman[85120]: 2026-01-27 08:29:29.898148154 +0000 UTC m=+0.025103143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:29:29 compute-0 podman[85120]: 2026-01-27 08:29:29.996455557 +0000 UTC m=+0.123410556 container init b03bd10aeb7a99424254b6bdc998e2ff040ce50d01dcb0c3b71aa288aa315899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goodall, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:29:30 compute-0 podman[85120]: 2026-01-27 08:29:30.003894519 +0000 UTC m=+0.130849488 container start b03bd10aeb7a99424254b6bdc998e2ff040ce50d01dcb0c3b71aa288aa315899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goodall, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:29:30 compute-0 gifted_goodall[85142]: 167 167
Jan 27 08:29:30 compute-0 systemd[1]: libpod-b03bd10aeb7a99424254b6bdc998e2ff040ce50d01dcb0c3b71aa288aa315899.scope: Deactivated successfully.
Jan 27 08:29:30 compute-0 conmon[85142]: conmon b03bd10aeb7a99424254 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b03bd10aeb7a99424254b6bdc998e2ff040ce50d01dcb0c3b71aa288aa315899.scope/container/memory.events
Jan 27 08:29:30 compute-0 podman[85120]: 2026-01-27 08:29:30.009264575 +0000 UTC m=+0.136219544 container attach b03bd10aeb7a99424254b6bdc998e2ff040ce50d01dcb0c3b71aa288aa315899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goodall, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 27 08:29:30 compute-0 podman[85120]: 2026-01-27 08:29:30.010134429 +0000 UTC m=+0.137089418 container died b03bd10aeb7a99424254b6bdc998e2ff040ce50d01dcb0c3b71aa288aa315899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:29:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd7afcb1a670b1201577a50f865c2a8c4cbc7c28a372ce597098b1fc24342029-merged.mount: Deactivated successfully.
Jan 27 08:29:30 compute-0 podman[85120]: 2026-01-27 08:29:30.057998541 +0000 UTC m=+0.184953510 container remove b03bd10aeb7a99424254b6bdc998e2ff040ce50d01dcb0c3b71aa288aa315899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:29:30 compute-0 systemd[1]: libpod-conmon-b03bd10aeb7a99424254b6bdc998e2ff040ce50d01dcb0c3b71aa288aa315899.scope: Deactivated successfully.
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:29:30 compute-0 ceph-osd[84951]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 27 08:29:30 compute-0 ceph-osd[84951]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e98c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e99400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e99400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e99400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e99400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluefs mount
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluefs mount shared_bdev_used = 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: RocksDB version: 7.9.2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Git sha 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: DB SUMMARY
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: DB Session ID:  0NL0MHU8IB7NKVLC5OPI
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: CURRENT file:  CURRENT
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: IDENTITY file:  IDENTITY
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                         Options.error_if_exists: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.create_if_missing: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                         Options.paranoid_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                                     Options.env: 0x558693e69c70
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                                Options.info_log: 0x55869305cba0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_file_opening_threads: 16
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                              Options.statistics: (nil)
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.use_fsync: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.max_log_file_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                         Options.allow_fallocate: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.use_direct_reads: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.create_missing_column_families: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                              Options.db_log_dir: 
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                                 Options.wal_dir: db.wal
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.advise_random_on_open: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.write_buffer_manager: 0x558693f72460
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                            Options.rate_limiter: (nil)
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.unordered_write: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.row_cache: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                              Options.wal_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.allow_ingest_behind: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.two_write_queues: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.manual_wal_flush: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.wal_compression: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.atomic_flush: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.log_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.allow_data_in_errors: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.db_host_id: __hostname__
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.max_background_jobs: 4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.max_background_compactions: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.max_subcompactions: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.max_open_files: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.bytes_per_sync: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.max_background_flushes: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Compression algorithms supported:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kZSTD supported: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kXpressCompression supported: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kBZip2Compression supported: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kLZ4Compression supported: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kZlibCompression supported: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kLZ4HCCompression supported: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kSnappyCompression supported: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693052dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 podman[85166]: 2026-01-27 08:29:30.220491269 +0000 UTC m=+0.051263096 container create 3c39128b824de0597866c0f5d8a00b6ac218a339aef595b279e62f4aaae8212b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jackson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693052dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693052dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693052dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693052dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693052dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693052dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c5c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693052430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a2e8fb80-ddd5-4d3b-9b16-12cf9e3d7e67
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502570226545, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502570226748, "job": 1, "event": "recovery_finished"}
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: freelist init
Jan 27 08:29:30 compute-0 ceph-osd[84951]: freelist _read_cfg
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluefs umount
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e99400 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:29:30 compute-0 systemd[1]: Started libpod-conmon-3c39128b824de0597866c0f5d8a00b6ac218a339aef595b279e62f4aaae8212b.scope.
Jan 27 08:29:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:30 compute-0 podman[85166]: 2026-01-27 08:29:30.196029203 +0000 UTC m=+0.026801030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf096ac231fee4af37e848ff89da064943f85649f0ce3237ec1140c306cc14e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf096ac231fee4af37e848ff89da064943f85649f0ce3237ec1140c306cc14e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf096ac231fee4af37e848ff89da064943f85649f0ce3237ec1140c306cc14e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf096ac231fee4af37e848ff89da064943f85649f0ce3237ec1140c306cc14e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:30 compute-0 podman[85166]: 2026-01-27 08:29:30.303199137 +0000 UTC m=+0.133970974 container init 3c39128b824de0597866c0f5d8a00b6ac218a339aef595b279e62f4aaae8212b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jackson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 08:29:30 compute-0 podman[85166]: 2026-01-27 08:29:30.31029212 +0000 UTC m=+0.141063947 container start 3c39128b824de0597866c0f5d8a00b6ac218a339aef595b279e62f4aaae8212b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jackson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 08:29:30 compute-0 podman[85166]: 2026-01-27 08:29:30.313757524 +0000 UTC m=+0.144529381 container attach 3c39128b824de0597866c0f5d8a00b6ac218a339aef595b279e62f4aaae8212b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e99400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e99400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e99400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bdev(0x558693e99400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluefs mount
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluefs mount shared_bdev_used = 4718592
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: RocksDB version: 7.9.2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Git sha 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: DB SUMMARY
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: DB Session ID:  0NL0MHU8IB7NKVLC5OPJ
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: CURRENT file:  CURRENT
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: IDENTITY file:  IDENTITY
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                         Options.error_if_exists: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.create_if_missing: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                         Options.paranoid_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                                     Options.env: 0x55869309e690
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                                Options.info_log: 0x558693066380
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_file_opening_threads: 16
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                              Options.statistics: (nil)
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.use_fsync: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.max_log_file_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                         Options.allow_fallocate: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.use_direct_reads: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.create_missing_column_families: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                              Options.db_log_dir: 
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                                 Options.wal_dir: db.wal
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.advise_random_on_open: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.write_buffer_manager: 0x558693f72460
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                            Options.rate_limiter: (nil)
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.unordered_write: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.row_cache: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                              Options.wal_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.allow_ingest_behind: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.two_write_queues: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.manual_wal_flush: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.wal_compression: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.atomic_flush: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.log_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.allow_data_in_errors: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.db_host_id: __hostname__
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.max_background_jobs: 4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.max_background_compactions: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.max_subcompactions: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.max_open_files: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.bytes_per_sync: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.max_background_flushes: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Compression algorithms supported:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kZSTD supported: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kXpressCompression supported: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kBZip2Compression supported: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kLZ4Compression supported: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kZlibCompression supported: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kLZ4HCCompression supported: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         kSnappyCompression supported: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693053610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693053610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693053610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693053610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693053610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
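The [p-0] dump above maps field-for-field onto RocksDB's C++ option structs. As a rough orientation aid, here is a minimal sketch that rebuilds the highlighted values through the stock API. Assumptions: Ceph actually derives these settings from its bluestore_rocksdb_options / bluestore_rocksdb_cfs config strings rather than calling this API directly, its BinnedLRUCache is approximated below by RocksDB's NewLRUCache, and the bloom bits-per-key is not logged, so 10 is a guess.

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Sketch: rebuild the options the OSD logs for column family [p-0].
    rocksdb::ColumnFamilyOptions MakePShardOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;            // Options.write_buffer_size: 16777216
      cf.max_write_buffer_number = 64;            // Options.max_write_buffer_number: 64
      cf.min_write_buffer_number_to_merge = 6;    // six memtables merged per flush
      cf.compression = rocksdb::kLZ4Compression;  // Options.compression: LZ4
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64 << 20;        // 67108864
      cf.max_bytes_for_level_base = 1 << 30;      // 1073741824
      cf.max_bytes_for_level_multiplier = 8;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                           // 30 days

      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.format_version = 5;
      // "filter_policy: bloomfilter"; bits/key is not logged, 10 is a guess.
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      // Stand-in for Ceph's BinnedLRUCache; capacity/num_shard_bits as logged.
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }

One reading note: compression_opts.level: 32767 is RocksDB's kDefaultCompressionLevel sentinel (use the codec's own default level), not a literal compression level, which is why the same value appears for every column family here.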
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693053610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305c2a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693053610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
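At this point the dump moves from the p-* shard group to the O-* group, which in BlueStore's sharded RocksDB holds the object (onode) keys. The O-* families point at a separate BinnedLRUCache instance (block_cache 0x558693053770 rather than 0x558693053610) sized at 536870912 bytes, exactly 512 MiB, while the p-* cache's 483183820 bytes is 0.45 of 1 GiB (truncated), plausibly a 0.45 KV-cache ratio applied to a 1 GiB OSD cache budget. A sketch of just that distinction, with NewLRUCache again standing in for Ceph's BinnedLRUCache:

    #include <memory>
    #include <rocksdb/cache.h>

    // Two independent block caches, one per shard group, as the differing
    // pointers in the log imply; num_shard_bits matches the logged value 4.
    std::shared_ptr<rocksdb::Cache> p_cache =
        rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);  // p-* group, ~461 MiB
    std::shared_ptr<rocksdb::Cache> o_cache =
        rocksdb::NewLRUCache(536870912, /*num_shard_bits=*/4);  // O-* group, 512 MiB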
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305dea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693053770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305dea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693053770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
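The table_factory record above corresponds to rocksdb::BlockBasedTableOptions. A sketch in the stock API follows; two caveats: BinnedLRUCache is Ceph's own sharded cache implementation, for which NewLRUCache() stands in here, and the bloom bits-per-key is not printed in the dump, so 10 is an assumption:

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/table.h>

    // Sketch of the table_factory options dumped above.
    rocksdb::BlockBasedTableOptions MakeTableOptions() {
      rocksdb::BlockBasedTableOptions t;
      t.cache_index_and_filter_blocks = true;   // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;  // pin_top_level_index_and_filter: 1
      t.block_size = 4096;
      t.block_restart_interval = 16;
      t.metadata_block_size = 4096;
      t.format_version = 5;
      t.whole_key_filtering = true;
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
      // capacity 536870912 = 512 MiB, num_shard_bits 4 => 16 shards
      t.block_cache = rocksdb::NewLRUCache(512 << 20, /*num_shard_bits=*/4);
      return t;
    }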
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
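The sizing knobs in the dump above govern both write stalls and the LSM shape: L0 slows incoming writes at 20 files and stops them at 36, and with level_compaction_dynamic_level_bytes = 0 the target size of level n (n >= 1) is max_bytes_for_level_base * multiplier^(n-1), the per-level "addtl" factors all being 1 here. A minimal sketch of that arithmetic with the dumped values:

    #include <cstdint>
    #include <cstdio>

    // Level targets for base = 1 GiB, multiplier = 8, num_levels = 7 (L0..L6).
    int main() {
      const uint64_t base = 1ULL << 30;  // max_bytes_for_level_base = 1073741824
      const uint64_t mult = 8;           // max_bytes_for_level_multiplier
      uint64_t target = base;
      for (int level = 1; level <= 6; ++level) {
        std::printf("L%d target: %llu GiB\n", level,
                    static_cast<unsigned long long>(target >> 30));
        target *= mult;                  // L1=1, L2=8, L3=64, L4=512 GiB, ...
      }
      return 0;
    }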
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:           Options.merge_operator: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55869305dea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558693053770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.compression: LZ4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.num_levels: 7
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
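Before an embedder can open a DB like this one, it has to know which column families the MANIFEST already holds, since DB::Open fails unless every existing family is passed in. A sketch using the stock enumeration API; the path is an assumption inferred from the _open_db line further below (BlueStore opens "db" under the OSD data directory):

    #include <rocksdb/db.h>
    #include <cstdio>
    #include <string>
    #include <vector>

    // Sketch: list the column families persisted in the MANIFEST.
    int main() {
      std::vector<std::string> names;
      auto s = rocksdb::DB::ListColumnFamilies(
          rocksdb::DBOptions(), "/var/lib/ceph/osd/ceph-0/db", &names);
      if (!s.ok()) return 1;
      for (const auto& n : names) std::printf("%s\n", n.c_str());
      // expected here: default, m-0..m-2, p-0..p-2, O-0..O-2, L, P
      return 0;
    }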
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
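A minimal sketch of opening the DB with the twelve families the manifest recovery just listed, using the stock multi-column-family open; in the real OSD each shard gets its own options (the dumps above), whereas defaults are used here for brevity, and the path is the same assumption as before:

    #include <rocksdb/db.h>
    #include <vector>

    // Sketch: DB::Open requires a descriptor per existing column family and
    // returns one handle per family.
    int main() {
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
      for (const char* name : {"default", "m-0", "m-1", "m-2", "p-0", "p-1",
                               "p-2", "O-0", "O-1", "O-2", "L", "P"}) {
        cfs.emplace_back(name, rocksdb::ColumnFamilyOptions());
      }
      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      auto s = rocksdb::DB::Open(rocksdb::DBOptions(),
                                 "/var/lib/ceph/osd/ceph-0/db",
                                 cfs, &handles, &db);
      if (!s.ok()) return 1;
      for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
      delete db;
      return 0;
    }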
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a2e8fb80-ddd5-4d3b-9b16-12cf9e3d7e67
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502570506406, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
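The "mode 2" in the line above is the numeric value of DBOptions::wal_recovery_mode; in the stock enum, 2 is kPointInTimeRecovery (replay WAL records up to the last consistent point), which is also the default. A one-field sketch:

    #include <rocksdb/options.h>

    // Sketch: the recovery mode reported as "mode 2" above.
    rocksdb::DBOptions MakeRecoveryOpts() {
      rocksdb::DBOptions opts;
      opts.wal_recovery_mode = rocksdb::WALRecoveryMode::kPointInTimeRecovery;
      opts.avoid_flush_during_recovery = false;  // recovered memtables get flushed,
                                                 // matching the table_file_creation
                                                 // events above
      return opts;
    }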
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502570516130, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502570, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2e8fb80-ddd5-4d3b-9b16-12cf9e3d7e67", "db_session_id": "0NL0MHU8IB7NKVLC5OPJ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502570518601, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502570, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2e8fb80-ddd5-4d3b-9b16-12cf9e3d7e67", "db_session_id": "0NL0MHU8IB7NKVLC5OPJ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502570521281, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502570, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2e8fb80-ddd5-4d3b-9b16-12cf9e3d7e67", "db_session_id": "0NL0MHU8IB7NKVLC5OPJ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502570522518, "job": 1, "event": "recovery_finished"}
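The EVENT_LOG_v1 JSON above (recovery_started, table_file_creation, recovery_finished) can also be observed in-process through RocksDB's listener API instead of scraping the info LOG. A sketch with the stock EventListener hook:

    #include <rocksdb/listener.h>
    #include <cstdio>

    // Sketch: report flushed/created SST files as they appear.
    class TableFileLogger : public rocksdb::EventListener {
     public:
      void OnTableFileCreated(const rocksdb::TableFileCreationInfo& info) override {
        // For file 36 above this would report cf "p-0" and 1594 bytes.
        std::fprintf(stderr, "cf=%s path=%s size=%llu\n",
                     info.cf_name.c_str(), info.file_path.c_str(),
                     static_cast<unsigned long long>(info.file_size));
      }
    };
    // attach with: options.listeners.push_back(std::make_shared<TableFileLogger>());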
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558693125c00
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: DB pointer 0x558693f5ba00
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
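The option string reported by _open_db above comes from Ceph's bluestore_rocksdb_options setting and is handed to RocksDB's string-based option parser. Roughly equivalent, as a sketch using the stock convenience API (subset of the string shown):

    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>

    // Sketch: parse a BlueStore-style option string into rocksdb::Options.
    rocksdb::Options ParseBlueStoreString() {
      rocksdb::Options base, out;
      auto s = rocksdb::GetOptionsFromString(
          base,
          "compression=kLZ4Compression,max_write_buffer_number=64,"
          "min_write_buffer_number_to_merge=6,write_buffer_size=16777216,"
          "compaction_style=kCompactionStyleLevel,"
          "level0_file_num_compaction_trigger=8,"
          "max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8",
          &out);
      (void)s;  // a real caller would check s.ok()
      return out;
    }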
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 27 08:29:30 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
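The "DUMPING STATS" report that follows is the periodic text dump controlled by DBOptions::stats_dump_period_sec (600 s by default); the same report is available on demand through the property API. A sketch, assuming db is an already-open rocksdb::DB*:

    #include <rocksdb/db.h>
    #include <string>

    // Sketch: fetch the "** DB Stats **" / "** Compaction Stats **" text.
    std::string DumpStats(rocksdb::DB* db) {
      std::string stats;
      db->GetProperty("rocksdb.stats", &stats);
      return stats;
    }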
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:29:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
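
The indented blocks above are a RocksDB statistics dump emitted by the OSD's BlueStore backend at startup, one "Compaction Stats" pair (by level, then by priority) per column family ([p-1], [p-2], [O-0] through [O-2], [L], [P]). Nearly every counter is zero because the store has just been created. A minimal sketch for pulling one figure out of such a capture, assuming the dump has been saved to a plain-text file (the filename and helper function are illustrative, not part of Ceph):

    import re

    # Map each column family in a captured RocksDB stats dump to its
    # "Cumulative compaction ... GB write" figure. Each CF header appears
    # twice (level table, then priority table); the second pass just
    # rewrites the same value.
    CF_HEADER = re.compile(r"\*\* Compaction Stats \[([^\]]+)\] \*\*")
    CUMULATIVE = re.compile(r"Cumulative compaction: ([0-9.]+) GB write")

    def compaction_write_by_cf(text):
        totals, current = {}, None
        for line in text.splitlines():
            if (m := CF_HEADER.search(line)):
                current = m.group(1)
            elif current and (m := CUMULATIVE.search(line)):
                totals[current] = float(m.group(1))
        return totals

    with open("rocksdb_stats.txt") as f:  # assumed capture of the dump above
        print(compaction_write_by_cf(f.read()))
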
Jan 27 08:29:30 compute-0 ceph-osd[84951]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 27 08:29:30 compute-0 ceph-osd[84951]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 27 08:29:30 compute-0 ceph-osd[84951]: _get_class not permitted to load lua
Jan 27 08:29:30 compute-0 ceph-osd[84951]: _get_class not permitted to load sdk
Jan 27 08:29:30 compute-0 ceph-osd[84951]: _get_class not permitted to load test_remote_reads
Jan 27 08:29:30 compute-0 ceph-osd[84951]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 27 08:29:30 compute-0 ceph-osd[84951]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 27 08:29:30 compute-0 ceph-osd[84951]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 27 08:29:30 compute-0 ceph-osd[84951]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 27 08:29:30 compute-0 ceph-osd[84951]: osd.0 0 load_pgs
Jan 27 08:29:30 compute-0 ceph-osd[84951]: osd.0 0 load_pgs opened 0 pgs
Jan 27 08:29:30 compute-0 ceph-osd[84951]: osd.0 0 log_to_monitors true
Jan 27 08:29:30 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0[84947]: 2026-01-27T08:29:30.547+0000 7fd404dda740 -1 osd.0 0 log_to_monitors true
Jan 27 08:29:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Jan 27 08:29:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2506616633,v1:192.168.122.100:6803/2506616633]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 27 08:29:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 27 08:29:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:29:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:30 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1541991168; not ready for session (expect reconnect)
Jan 27 08:29:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 27 08:29:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2506616633,v1:192.168.122.100:6803/2506616633]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 27 08:29:30 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:29:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Jan 27 08:29:30 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Jan 27 08:29:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Jan 27 08:29:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2506616633,v1:192.168.122.100:6803/2506616633]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 27 08:29:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-0,root=default}
Jan 27 08:29:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 27 08:29:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 27 08:29:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:30 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:29:30 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:29:30 compute-0 ceph-mon[74357]: from='osd.1 [v2:192.168.122.101:6800/1541991168,v1:192.168.122.101:6801/1541991168]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 27 08:29:30 compute-0 ceph-mon[74357]: osdmap e7: 2 total, 0 up, 2 in
Jan 27 08:29:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:30 compute-0 ceph-mon[74357]: from='osd.0 [v2:192.168.122.100:6802/2506616633,v1:192.168.122.100:6803/2506616633]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
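
The two mon_commands dispatched by osd.0 in the lines above are the standard boot-time CRUSH registration: the OSD first tags itself with a device class, then weights itself into the CRUSH tree under its host. Hand-run equivalents, with the arguments copied from the audit lines (running them manually is purely illustrative; the OSD issues them itself at boot):

    import subprocess

    # osd crush set-device-class: tag osd.0 as an HDD.
    subprocess.run(["ceph", "osd", "crush", "set-device-class", "hdd", "0"],
                   check=True)
    # osd crush create-or-move: place osd.0 (weight 0.0068) under
    # host=compute-0 in the default root.
    subprocess.run(["ceph", "osd", "crush", "create-or-move", "0", "0.0068",
                    "host=compute-0", "root=default"], check=True)
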
Jan 27 08:29:31 compute-0 cranky_jackson[85382]: {
Jan 27 08:29:31 compute-0 cranky_jackson[85382]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:29:31 compute-0 cranky_jackson[85382]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:29:31 compute-0 cranky_jackson[85382]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:29:31 compute-0 cranky_jackson[85382]:         "osd_id": 0,
Jan 27 08:29:31 compute-0 cranky_jackson[85382]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:29:31 compute-0 cranky_jackson[85382]:         "type": "bluestore"
Jan 27 08:29:31 compute-0 cranky_jackson[85382]:     }
Jan 27 08:29:31 compute-0 cranky_jackson[85382]: }
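
The JSON block printed by the cranky_jackson container maps an OSD to its backing device; the keys shown in the log (ceph_fsid, device, osd_id, osd_uuid, type) are all a consumer needs. A sketch that parses such a capture, assuming the output has been saved to a file (the filename is an assumption):

    import json

    with open("osd_list.json") as f:   # assumed capture of the JSON above
        listing = json.load(f)

    # One entry per OSD, keyed by osd_uuid, exactly as in the log output.
    for osd_uuid, info in listing.items():
        print(f"osd.{info['osd_id']} on {info['device']} ({info['type']})")
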
Jan 27 08:29:31 compute-0 systemd[1]: libpod-3c39128b824de0597866c0f5d8a00b6ac218a339aef595b279e62f4aaae8212b.scope: Deactivated successfully.
Jan 27 08:29:31 compute-0 podman[85166]: 2026-01-27 08:29:31.161813623 +0000 UTC m=+0.992585480 container died 3c39128b824de0597866c0f5d8a00b6ac218a339aef595b279e62f4aaae8212b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jackson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:29:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf096ac231fee4af37e848ff89da064943f85649f0ce3237ec1140c306cc14e8-merged.mount: Deactivated successfully.
Jan 27 08:29:31 compute-0 podman[85166]: 2026-01-27 08:29:31.214442634 +0000 UTC m=+1.045214461 container remove 3c39128b824de0597866c0f5d8a00b6ac218a339aef595b279e62f4aaae8212b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jackson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 27 08:29:31 compute-0 systemd[1]: libpod-conmon-3c39128b824de0597866c0f5d8a00b6ac218a339aef595b279e62f4aaae8212b.scope: Deactivated successfully.
Jan 27 08:29:31 compute-0 sudo[85051]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:29:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:29:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:31 compute-0 sudo[85631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:31 compute-0 sudo[85631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:31 compute-0 sudo[85631]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:31 compute-0 sudo[85656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:29:31 compute-0 sudo[85656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:31 compute-0 sudo[85656]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:31 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 27 08:29:31 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 27 08:29:31 compute-0 sudo[85681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:31 compute-0 sudo[85681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:31 compute-0 sudo[85681]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:31 compute-0 sudo[85706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:29:31 compute-0 sudo[85706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:31 compute-0 sudo[85706]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 27 08:29:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:29:31 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1541991168; not ready for session (expect reconnect)
Jan 27 08:29:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 27 08:29:31 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:31 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:29:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2506616633,v1:192.168.122.100:6803/2506616633]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 27 08:29:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Jan 27 08:29:31 compute-0 ceph-osd[84951]: osd.0 0 done with init, starting boot process
Jan 27 08:29:31 compute-0 ceph-osd[84951]: osd.0 0 start_boot
Jan 27 08:29:31 compute-0 ceph-osd[84951]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 27 08:29:31 compute-0 ceph-osd[84951]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 27 08:29:31 compute-0 ceph-osd[84951]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 27 08:29:31 compute-0 ceph-osd[84951]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 27 08:29:31 compute-0 ceph-osd[84951]: osd.0 0  bench count 12288000 bsize 4 KiB
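
The bench line reads "count 12288000 bsize 4 KiB"; taking count as total bytes (the usual osd bench convention, an assumption here), the startup benchmark issues 3000 writes of 4 KiB:

    # 12288000 B at 4 KiB per IO -> 3000 IOs in the startup benchmark.
    count_bytes, bsize = 12288000, 4 * 1024
    print(count_bytes // bsize)  # 3000
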
Jan 27 08:29:31 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Jan 27 08:29:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 27 08:29:31 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 27 08:29:31 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:31 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:29:31 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:29:31 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2506616633; not ready for session (expect reconnect)
Jan 27 08:29:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 27 08:29:31 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:31 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:29:31 compute-0 ceph-mon[74357]: purged_snaps scrub starts
Jan 27 08:29:31 compute-0 ceph-mon[74357]: purged_snaps scrub ok
Jan 27 08:29:31 compute-0 ceph-mon[74357]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:31 compute-0 ceph-mon[74357]: from='osd.0 [v2:192.168.122.100:6802/2506616633,v1:192.168.122.100:6803/2506616633]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 27 08:29:31 compute-0 ceph-mon[74357]: osdmap e8: 2 total, 0 up, 2 in
Jan 27 08:29:31 compute-0 ceph-mon[74357]: from='osd.0 [v2:192.168.122.100:6802/2506616633,v1:192.168.122.100:6803/2506616633]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 27 08:29:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:31 compute-0 ceph-mon[74357]: from='osd.0 [v2:192.168.122.100:6802/2506616633,v1:192.168.122.100:6803/2506616633]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 27 08:29:31 compute-0 ceph-mon[74357]: osdmap e9: 2 total, 0 up, 2 in
Jan 27 08:29:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:31 compute-0 sudo[85731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:31 compute-0 sudo[85731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:31 compute-0 sudo[85731]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:31 compute-0 sudo[85756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 27 08:29:31 compute-0 sudo[85756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:29:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:32 compute-0 podman[85853]: 2026-01-27 08:29:32.31249257 +0000 UTC m=+0.068344390 container exec b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:29:32 compute-0 podman[85853]: 2026-01-27 08:29:32.465223152 +0000 UTC m=+0.221074942 container exec_died b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 27 08:29:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:29:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:29:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:29:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:32 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1541991168; not ready for session (expect reconnect)
Jan 27 08:29:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 27 08:29:32 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:32 compute-0 sudo[85756]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:32 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:29:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:29:32 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2506616633; not ready for session (expect reconnect)
Jan 27 08:29:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 27 08:29:32 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:32 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:29:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:29:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:32 compute-0 sudo[85937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:32 compute-0 sudo[85937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:32 compute-0 sudo[85937]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:32 compute-0 sudo[85962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:29:32 compute-0 sudo[85962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:32 compute-0 sudo[85962]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:33 compute-0 sudo[85987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:33 compute-0 sudo[85987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:33 compute-0 sudo[85987]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:33 compute-0 sudo[86012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:29:33 compute-0 sudo[86012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:33 compute-0 sudo[86012]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:33 compute-0 sudo[86067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:33 compute-0 sudo[86067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:33 compute-0 sudo[86067]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:33 compute-0 sudo[86092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:29:33 compute-0 sudo[86092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:33 compute-0 sudo[86092]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:33 compute-0 sudo[86117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:29:33 compute-0 sudo[86117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:29:33 compute-0 sudo[86117]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:33 compute-0 sudo[86142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- inventory --format=json-pretty --filter-for-batch
Jan 27 08:29:33 compute-0 sudo[86142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
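
The sudo line above shows cephadm shelling into the ceph container to run ceph-volume's inventory with --filter-for-batch, which is how the orchestrator decides which disks are consumable for new OSDs. An equivalent manual call, with the fsid copied from the journal (invoking it by hand is illustrative only, and the exact JSON fields are an assumption):

    import json, subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "281e9bde-2795-59f4-98ac-90cf5b49a2de",
         "--", "inventory", "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        # Each entry is expected to carry a device path and availability flag.
        print(dev.get("path"), dev.get("available"))
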
Jan 27 08:29:33 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1541991168; not ready for session (expect reconnect)
Jan 27 08:29:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 27 08:29:33 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:33 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:29:33 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2506616633; not ready for session (expect reconnect)
Jan 27 08:29:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 27 08:29:33 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:33 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:29:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 27 08:29:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:29:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:29:34 compute-0 ceph-mon[74357]: purged_snaps scrub starts
Jan 27 08:29:34 compute-0 ceph-mon[74357]: purged_snaps scrub ok
Jan 27 08:29:34 compute-0 ceph-mon[74357]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:34 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:34 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:34 compute-0 ceph-mon[74357]: OSD bench result of 9120.842119 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
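
The message above means osd.1's startup self-benchmark (9120.84 IOPS) fell outside the 50-500 IOPS sanity window for its device class, so the measured value was discarded and the default capacity of 315 IOPS kept. The remedy the message itself suggests is to benchmark the device externally (e.g. with fio) and pin the capacity; a sketch, where 9000 is a placeholder for whatever fio actually measures:

    import subprocess

    # Pin the mClock IOPS capacity for osd.1, as the log message suggests.
    # osd_mclock_max_capacity_iops_hdd is named in the message; 9000 stands
    # in for a fio-measured figure and is not a recommendation.
    subprocess.run(
        ["ceph", "config", "set", "osd.1",
         "osd_mclock_max_capacity_iops_hdd", "9000"],
        check=True,
    )
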
Jan 27 08:29:34 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:34 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:34 compute-0 podman[86206]: 2026-01-27 08:29:34.085422305 +0000 UTC m=+0.113856256 container create e146054bed117e99edaeb96fc5059da6210e6949642786f9aa6a2728b50f62a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:29:34 compute-0 podman[86206]: 2026-01-27 08:29:33.993488106 +0000 UTC m=+0.021922077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:29:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Jan 27 08:29:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:34 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/1541991168,v1:192.168.122.101:6801/1541991168] boot
Jan 27 08:29:34 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Jan 27 08:29:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 27 08:29:34 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 27 08:29:34 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:34 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:29:34 compute-0 systemd[1]: Started libpod-conmon-e146054bed117e99edaeb96fc5059da6210e6949642786f9aa6a2728b50f62a5.scope.
Jan 27 08:29:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:34 compute-0 podman[86206]: 2026-01-27 08:29:34.195128598 +0000 UTC m=+0.223562569 container init e146054bed117e99edaeb96fc5059da6210e6949642786f9aa6a2728b50f62a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:29:34 compute-0 podman[86206]: 2026-01-27 08:29:34.20144612 +0000 UTC m=+0.229880071 container start e146054bed117e99edaeb96fc5059da6210e6949642786f9aa6a2728b50f62a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 27 08:29:34 compute-0 tender_montalcini[86223]: 167 167
Jan 27 08:29:34 compute-0 systemd[1]: libpod-e146054bed117e99edaeb96fc5059da6210e6949642786f9aa6a2728b50f62a5.scope: Deactivated successfully.
Jan 27 08:29:34 compute-0 podman[86206]: 2026-01-27 08:29:34.213476927 +0000 UTC m=+0.241910878 container attach e146054bed117e99edaeb96fc5059da6210e6949642786f9aa6a2728b50f62a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 08:29:34 compute-0 podman[86206]: 2026-01-27 08:29:34.213771415 +0000 UTC m=+0.242205366 container died e146054bed117e99edaeb96fc5059da6210e6949642786f9aa6a2728b50f62a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:29:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-91ea636e04ff68a2dfaed96648501755c17686edd3aebe8cd302f68970f6d9cc-merged.mount: Deactivated successfully.
Jan 27 08:29:34 compute-0 podman[86206]: 2026-01-27 08:29:34.301460739 +0000 UTC m=+0.329894690 container remove e146054bed117e99edaeb96fc5059da6210e6949642786f9aa6a2728b50f62a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:29:34 compute-0 systemd[1]: libpod-conmon-e146054bed117e99edaeb96fc5059da6210e6949642786f9aa6a2728b50f62a5.scope: Deactivated successfully.
Jan 27 08:29:34 compute-0 podman[86248]: 2026-01-27 08:29:34.435434291 +0000 UTC m=+0.037623153 container create dd0df5419ab5c96936622c56aae11fab29a3b9e75b515d0f098782ccbb9231e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:29:34 compute-0 systemd[1]: Started libpod-conmon-dd0df5419ab5c96936622c56aae11fab29a3b9e75b515d0f098782ccbb9231e9.scope.
Jan 27 08:29:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:34 compute-0 podman[86248]: 2026-01-27 08:29:34.419566061 +0000 UTC m=+0.021754933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:29:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ec4bbac15e9baa201d11e741c8dd06e5e3a64ee0e1b33de0a18c8dfb1dea4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ec4bbac15e9baa201d11e741c8dd06e5e3a64ee0e1b33de0a18c8dfb1dea4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ec4bbac15e9baa201d11e741c8dd06e5e3a64ee0e1b33de0a18c8dfb1dea4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ec4bbac15e9baa201d11e741c8dd06e5e3a64ee0e1b33de0a18c8dfb1dea4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:34 compute-0 podman[86248]: 2026-01-27 08:29:34.52732599 +0000 UTC m=+0.129514842 container init dd0df5419ab5c96936622c56aae11fab29a3b9e75b515d0f098782ccbb9231e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:29:34 compute-0 podman[86248]: 2026-01-27 08:29:34.539315386 +0000 UTC m=+0.141504238 container start dd0df5419ab5c96936622c56aae11fab29a3b9e75b515d0f098782ccbb9231e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:29:34 compute-0 podman[86248]: 2026-01-27 08:29:34.546331097 +0000 UTC m=+0.148519949 container attach dd0df5419ab5c96936622c56aae11fab29a3b9e75b515d0f098782ccbb9231e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:29:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:34 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2506616633; not ready for session (expect reconnect)
Jan 27 08:29:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 27 08:29:34 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:34 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:29:34 compute-0 ceph-osd[84951]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 37.980 iops: 9722.984 elapsed_sec: 0.309
Jan 27 08:29:34 compute-0 ceph-osd[84951]: log_channel(cluster) log [WRN] : OSD bench result of 9722.984113 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 27 08:29:34 compute-0 ceph-osd[84951]: osd.0 0 waiting for initial osdmap
Jan 27 08:29:34 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0[84947]: 2026-01-27T08:29:34.798+0000 7fd400d5a640 -1 osd.0 0 waiting for initial osdmap
Jan 27 08:29:34 compute-0 ceph-osd[84951]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 27 08:29:34 compute-0 ceph-osd[84951]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 27 08:29:34 compute-0 ceph-osd[84951]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 27 08:29:34 compute-0 ceph-osd[84951]: osd.0 10 check_osdmap_features require_osd_release unknown -> reef
Jan 27 08:29:34 compute-0 ceph-osd[84951]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 27 08:29:34 compute-0 ceph-osd[84951]: osd.0 10 set_numa_affinity not setting numa affinity
Jan 27 08:29:34 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-osd-0[84947]: 2026-01-27T08:29:34.818+0000 7fd3fc382640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 27 08:29:34 compute-0 ceph-osd[84951]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 27 08:29:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:29:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:29:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:29:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:29:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Jan 27 08:29:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 27 08:29:34 compute-0 ceph-mgr[74650]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 27 08:29:34 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 27 08:29:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 27 08:29:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:35 compute-0 ceph-mgr[74650]: [devicehealth INFO root] creating mgr pool
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Jan 27 08:29:35 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:29:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:35 compute-0 ceph-mon[74357]: osd.1 [v2:192.168.122.101:6800/1541991168,v1:192.168.122.101:6801/1541991168] boot
Jan 27 08:29:35 compute-0 ceph-mon[74357]: osdmap e10: 2 total, 1 up, 2 in
Jan 27 08:29:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 27 08:29:35 compute-0 ceph-mon[74357]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:29:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 27 08:29:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 27 08:29:35 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 27 08:29:35 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2506616633,v1:192.168.122.100:6803/2506616633] boot
Jan 27 08:29:35 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 27 08:29:35 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Jan 27 08:29:35 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 27 08:29:35 compute-0 ceph-osd[84951]: osd.0 11 state: booting -> active
Jan 27 08:29:35 compute-0 ceph-osd[84951]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 27 08:29:35 compute-0 ceph-osd[84951]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 27 08:29:35 compute-0 ceph-osd[84951]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 27 08:29:35 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [0] r=0 lpr=11 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]: [
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:     {
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:         "available": false,
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:         "ceph_device": false,
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:         "lsm_data": {},
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:         "lvs": [],
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:         "path": "/dev/sr0",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:         "rejected_reasons": [
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "Insufficient space (<5GB)",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "Has a FileSystem"
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:         ],
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:         "sys_api": {
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "actuators": null,
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "device_nodes": "sr0",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "devname": "sr0",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "human_readable_size": "482.00 KB",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "id_bus": "ata",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "model": "QEMU DVD-ROM",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "nr_requests": "2",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "parent": "/dev/sr0",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "partitions": {},
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "path": "/dev/sr0",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "removable": "1",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "rev": "2.5+",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "ro": "0",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "rotational": "1",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "sas_address": "",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "sas_device_handle": "",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "scheduler_mode": "mq-deadline",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "sectors": 0,
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "sectorsize": "2048",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "size": 493568.0,
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "support_discard": "2048",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "type": "disk",
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:             "vendor": "QEMU"
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:         }
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]:     }
Jan 27 08:29:35 compute-0 focused_hofstadter[86264]: ]
Jan 27 08:29:35 compute-0 systemd[1]: libpod-dd0df5419ab5c96936622c56aae11fab29a3b9e75b515d0f098782ccbb9231e9.scope: Deactivated successfully.
Jan 27 08:29:35 compute-0 podman[86248]: 2026-01-27 08:29:35.591598177 +0000 UTC m=+1.193787029 container died dd0df5419ab5c96936622c56aae11fab29a3b9e75b515d0f098782ccbb9231e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:29:35 compute-0 systemd[1]: libpod-dd0df5419ab5c96936622c56aae11fab29a3b9e75b515d0f098782ccbb9231e9.scope: Consumed 1.046s CPU time.
Jan 27 08:29:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0ec4bbac15e9baa201d11e741c8dd06e5e3a64ee0e1b33de0a18c8dfb1dea4e-merged.mount: Deactivated successfully.
Jan 27 08:29:35 compute-0 podman[86248]: 2026-01-27 08:29:35.643951991 +0000 UTC m=+1.246140843 container remove dd0df5419ab5c96936622c56aae11fab29a3b9e75b515d0f098782ccbb9231e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:29:35 compute-0 systemd[1]: libpod-conmon-dd0df5419ab5c96936622c56aae11fab29a3b9e75b515d0f098782ccbb9231e9.scope: Deactivated successfully.
Jan 27 08:29:35 compute-0 sudo[86142]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:29:35 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:29:35 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:29:35 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:29:35 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Jan 27 08:29:35 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 27 08:29:35 compute-0 ceph-mgr[74650]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 27 08:29:35 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 27 08:29:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 27 08:29:35 compute-0 ceph-mgr[74650]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 27 08:29:35 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 27 08:29:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 27 08:29:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 27 08:29:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Jan 27 08:29:36 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 27 08:29:36 compute-0 ceph-mon[74357]: OSD bench result of 9722.984113 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 27 08:29:36 compute-0 ceph-mon[74357]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 27 08:29:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 27 08:29:36 compute-0 ceph-mon[74357]: osd.0 [v2:192.168.122.100:6802/2506616633,v1:192.168.122.100:6803/2506616633] boot
Jan 27 08:29:36 compute-0 ceph-mon[74357]: osdmap e11: 2 total, 2 up, 2 in
Jan 27 08:29:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 27 08:29:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 27 08:29:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 27 08:29:36 compute-0 ceph-mon[74357]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 27 08:29:36 compute-0 ceph-mon[74357]: Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 27 08:29:36 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [0] r=0 lpr=11 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:29:36 compute-0 ceph-mgr[74650]: [devicehealth INFO root] creating main.db for devicehealth
Jan 27 08:29:36 compute-0 ceph-mgr[74650]: [devicehealth INFO root] Check health
Jan 27 08:29:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 27 08:29:36 compute-0 sudo[87372]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 27 08:29:36 compute-0 sudo[87372]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 08:29:36 compute-0 sudo[87372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 27 08:29:36 compute-0 sudo[87372]: pam_unix(sudo:session): session closed for user root
Jan 27 08:29:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 27 08:29:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 27 08:29:36 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 27 08:29:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 27 08:29:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 27 08:29:37 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 27 08:29:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 27 08:29:37 compute-0 ceph-mon[74357]: osdmap e12: 2 total, 2 up, 2 in
Jan 27 08:29:37 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 27 08:29:37 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 27 08:29:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 27 08:29:37 compute-0 ceph-mon[74357]: pgmap v45: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:37 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.vujqxq(active, since 82s)
Jan 27 08:29:38 compute-0 ceph-mon[74357]: osdmap e13: 2 total, 2 up, 2 in
Jan 27 08:29:38 compute-0 ceph-mon[74357]: mgrmap e9: compute-0.vujqxq(active, since 82s)
Jan 27 08:29:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:39 compute-0 ceph-mon[74357]: pgmap v47: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:40 compute-0 ceph-mon[74357]: pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:42 compute-0 ceph-mon[74357]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:44 compute-0 ceph-mon[74357]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:29:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:29:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:29:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:29:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:29:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:29:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:46 compute-0 ceph-mon[74357]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:48 compute-0 ceph-mon[74357]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:29:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:29:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:29:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:29:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 27 08:29:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 27 08:29:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:29:50 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:29:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:29:50 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 27 08:29:50 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 27 08:29:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:51 compute-0 ceph-mon[74357]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 27 08:29:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:29:51 compute-0 ceph-mon[74357]: Updating compute-2:/etc/ceph/ceph.conf
Jan 27 08:29:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:51 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:29:51 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:29:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:52 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 27 08:29:52 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 27 08:29:52 compute-0 ceph-mon[74357]: Updating compute-2:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:29:52 compute-0 ceph-mon[74357]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:52 compute-0 ceph-mon[74357]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 27 08:29:53 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring
Jan 27 08:29:53 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring
Jan 27 08:29:53 compute-0 ceph-mon[74357]: Updating compute-2:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.client.admin.keyring
Jan 27 08:29:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:29:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:29:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:29:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:54 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev 4442c689-6cb8-439a-82e3-38729b22bcdf (Updating mon deployment (+2 -> 3))
Jan 27 08:29:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 27 08:29:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 27 08:29:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 27 08:29:54 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 27 08:29:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:29:54 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:54 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 27 08:29:54 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 27 08:29:55 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:55 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:55 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:55 compute-0 ceph-mon[74357]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:55 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 27 08:29:55 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 27 08:29:55 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:55 compute-0 ceph-mon[74357]: Deploying daemon mon.compute-2 on compute-2
Jan 27 08:29:55 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 27 08:29:55 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 27 08:29:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:56 compute-0 ceph-mon[74357]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 27 08:29:56 compute-0 ceph-mon[74357]: Cluster is now healthy
Jan 27 08:29:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:29:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:29:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 27 08:29:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 27 08:29:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 27 08:29:57 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:29:57 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:29:57 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 27 08:29:57 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 27 08:29:57 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1601296315; not ready for session (expect reconnect)
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 27 08:29:57 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:29:57 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 27 08:29:57 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 27 08:29:57 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 27 08:29:57 compute-0 ceph-mon[74357]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:29:57 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 27 08:29:57 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:29:57 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 27 08:29:58 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1601296315; not ready for session (expect reconnect)
Jan 27 08:29:58 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 27 08:29:58 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:29:58 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 27 08:29:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:29:59 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:29:59 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 27 08:29:59 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 27 08:29:59 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 27 08:29:59 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/342125067; not ready for session (expect reconnect)
Jan 27 08:29:59 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 27 08:29:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:29:59 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 27 08:29:59 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1601296315; not ready for session (expect reconnect)
Jan 27 08:29:59 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 27 08:29:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:29:59 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 27 08:29:59 compute-0 sudo[87398]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azznwwvkcppobfxkdhyjupbdixcsfclk ; /usr/bin/python3'
Jan 27 08:29:59 compute-0 sudo[87398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:29:59 compute-0 python3[87400]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:29:59 compute-0 podman[87402]: 2026-01-27 08:29:59.839033677 +0000 UTC m=+0.043040275 container create 34e1e35603fd26857e76398a91fbfa501775852c995566dbeccbf7088435da56 (image=quay.io/ceph/ceph:v18, name=sad_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:29:59 compute-0 systemd[1]: Started libpod-conmon-34e1e35603fd26857e76398a91fbfa501775852c995566dbeccbf7088435da56.scope.
Jan 27 08:29:59 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcd410d49886b440a13d798c48ce1d1fa7af0817f2c5b1b5c0a59e6d6fac5fc7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcd410d49886b440a13d798c48ce1d1fa7af0817f2c5b1b5c0a59e6d6fac5fc7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcd410d49886b440a13d798c48ce1d1fa7af0817f2c5b1b5c0a59e6d6fac5fc7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:59 compute-0 podman[87402]: 2026-01-27 08:29:59.820212749 +0000 UTC m=+0.024219367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:29:59 compute-0 podman[87402]: 2026-01-27 08:29:59.92900767 +0000 UTC m=+0.133014268 container init 34e1e35603fd26857e76398a91fbfa501775852c995566dbeccbf7088435da56 (image=quay.io/ceph/ceph:v18, name=sad_montalcini, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 08:29:59 compute-0 podman[87402]: 2026-01-27 08:29:59.940830775 +0000 UTC m=+0.144837383 container start 34e1e35603fd26857e76398a91fbfa501775852c995566dbeccbf7088435da56 (image=quay.io/ceph/ceph:v18, name=sad_montalcini, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:29:59 compute-0 podman[87402]: 2026-01-27 08:29:59.947823556 +0000 UTC m=+0.151830174 container attach 34e1e35603fd26857e76398a91fbfa501775852c995566dbeccbf7088435da56 (image=quay.io/ceph/ceph:v18, name=sad_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:00 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/342125067; not ready for session (expect reconnect)
Jan 27 08:30:00 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 27 08:30:00 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:00 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 27 08:30:00 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1601296315; not ready for session (expect reconnect)
Jan 27 08:30:00 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 27 08:30:00 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:30:00 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 27 08:30:00 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 27 08:30:00 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 27 08:30:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:00 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 27 08:30:01 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 27 08:30:01 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/342125067; not ready for session (expect reconnect)
Jan 27 08:30:01 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 27 08:30:01 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:01 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 27 08:30:01 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1601296315; not ready for session (expect reconnect)
Jan 27 08:30:01 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 27 08:30:01 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:30:01 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 27 08:30:01 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 27 08:30:02 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/342125067; not ready for session (expect reconnect)
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 27 08:30:02 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1601296315; not ready for session (expect reconnect)
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 27 08:30:02 compute-0 ceph-mon[74357]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap 
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.vujqxq(active, since 107s)
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:02 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev 4442c689-6cb8-439a-82e3-38729b22bcdf (Updating mon deployment (+2 -> 3))
Jan 27 08:30:02 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event 4442c689-6cb8-439a-82e3-38729b22bcdf (Updating mon deployment (+2 -> 3)) in 8 seconds
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:02 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev ed23beef-d972-4d7f-af1c-0a9809901b46 (Updating mgr deployment (+2 -> 3))
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.cbywrc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cbywrc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cbywrc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:30:02 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.cbywrc on compute-2
Jan 27 08:30:02 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.cbywrc on compute-2
Jan 27 08:30:02 compute-0 ceph-mon[74357]: Deploying daemon mon.compute-1 on compute-1
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-0 calling monitor election
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-2 calling monitor election
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 27 08:30:02 compute-0 ceph-mon[74357]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 27 08:30:02 compute-0 ceph-mon[74357]: fsmap 
Jan 27 08:30:02 compute-0 ceph-mon[74357]: osdmap e13: 2 total, 2 up, 2 in
Jan 27 08:30:02 compute-0 ceph-mon[74357]: mgrmap e9: compute-0.vujqxq(active, since 107s)
Jan 27 08:30:02 compute-0 ceph-mon[74357]: overall HEALTH_OK
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cbywrc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 27 08:30:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cbywrc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 27 08:30:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 27 08:30:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 27 08:30:03 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/342125067; not ready for session (expect reconnect)
Jan 27 08:30:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 27 08:30:03 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:03 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 27 08:30:03 compute-0 ceph-mon[74357]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 27 08:30:03 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 27 08:30:03 compute-0 ceph-mon[74357]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 27 08:30:03 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:03 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 27 08:30:03 compute-0 ceph-mon[74357]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 27 08:30:03 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 27 08:30:03 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:30:03 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 27 08:30:03 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:30:03 compute-0 ceph-mgr[74650]: mgr.server handle_report got status from non-daemon mon.compute-2
Jan 27 08:30:03 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:30:03.241+0000 7fe0fd675640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Jan 27 08:30:03 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:03 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:03 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:03 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:03 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:30:04 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/342125067; not ready for session (expect reconnect)
Jan 27 08:30:04 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 27 08:30:04 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:04 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 27 08:30:04 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:04 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:04 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:05 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/342125067; not ready for session (expect reconnect)
Jan 27 08:30:05 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 27 08:30:05 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:05 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 27 08:30:05 compute-0 ceph-mgr[74650]: [progress INFO root] Writing back 3 completed events
Jan 27 08:30:05 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 27 08:30:05 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:06 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/342125067; not ready for session (expect reconnect)
Jan 27 08:30:06 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 27 08:30:06 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:06 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 27 08:30:06 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:06 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:06 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:06 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:07 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/342125067; not ready for session (expect reconnect)
Jan 27 08:30:07 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 27 08:30:07 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:07 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 27 08:30:07 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:07 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:07 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 27 08:30:08 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/342125067; not ready for session (expect reconnect)
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 27 08:30:08 compute-0 ceph-mon[74357]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap 
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.vujqxq(active, since 113s)
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.jqbgxp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jqbgxp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jqbgxp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:30:08 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.jqbgxp on compute-1
Jan 27 08:30:08 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.jqbgxp on compute-1
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mon.compute-0 calling monitor election
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mon.compute-2 calling monitor election
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mon[74357]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mon.compute-1 calling monitor election
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mon[74357]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 27 08:30:08 compute-0 ceph-mon[74357]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 27 08:30:08 compute-0 ceph-mon[74357]: fsmap 
Jan 27 08:30:08 compute-0 ceph-mon[74357]: osdmap e13: 2 total, 2 up, 2 in
Jan 27 08:30:08 compute-0 ceph-mon[74357]: mgrmap e9: compute-0.vujqxq(active, since 113s)
Jan 27 08:30:08 compute-0 ceph-mon[74357]: overall HEALTH_OK
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jqbgxp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 27 08:30:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:09 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/342125067; not ready for session (expect reconnect)
Jan 27 08:30:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 27 08:30:09 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jqbgxp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 27 08:30:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 08:30:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:09 compute-0 ceph-mon[74357]: Deploying daemon mgr.compute-1.jqbgxp on compute-1
Jan 27 08:30:09 compute-0 ceph-mon[74357]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 27 08:30:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 27 08:30:09 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/622189321' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 27 08:30:09 compute-0 sad_montalcini[87418]: 
Jan 27 08:30:09 compute-0 sad_montalcini[87418]: {"fsid":"281e9bde-2795-59f4-98ac-90cf5b49a2de","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":1,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":13,"num_osds":2,"num_up_osds":2,"osd_up_since":1769502575,"num_in_osds":2,"osd_in_since":1769502557,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":55779328,"bytes_avail":14968217600,"bytes_total":15023996928},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-27T08:29:16.741143+0000","services":{}},"progress_events":{"ed23beef-d972-4d7f-af1c-0a9809901b46":{"message":"Updating mgr deployment (+2 -> 3) (5s)\n      [==============..............] (remaining: 5s)","progress":0.5,"add_to_ceph_s":true}}}
Jan 27 08:30:09 compute-0 systemd[1]: libpod-34e1e35603fd26857e76398a91fbfa501775852c995566dbeccbf7088435da56.scope: Deactivated successfully.
Jan 27 08:30:09 compute-0 podman[87402]: 2026-01-27 08:30:09.567046895 +0000 UTC m=+9.771053503 container died 34e1e35603fd26857e76398a91fbfa501775852c995566dbeccbf7088435da56 (image=quay.io/ceph/ceph:v18, name=sad_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcd410d49886b440a13d798c48ce1d1fa7af0817f2c5b1b5c0a59e6d6fac5fc7-merged.mount: Deactivated successfully.
Jan 27 08:30:09 compute-0 podman[87402]: 2026-01-27 08:30:09.614919231 +0000 UTC m=+9.818925829 container remove 34e1e35603fd26857e76398a91fbfa501775852c995566dbeccbf7088435da56 (image=quay.io/ceph/ceph:v18, name=sad_montalcini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:09 compute-0 systemd[1]: libpod-conmon-34e1e35603fd26857e76398a91fbfa501775852c995566dbeccbf7088435da56.scope: Deactivated successfully.
Jan 27 08:30:09 compute-0 sudo[87398]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:30:09 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:30:09 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 27 08:30:09 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:09 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev ed23beef-d972-4d7f-af1c-0a9809901b46 (Updating mgr deployment (+2 -> 3))
Jan 27 08:30:09 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event ed23beef-d972-4d7f-af1c-0a9809901b46 (Updating mgr deployment (+2 -> 3)) in 8 seconds
Jan 27 08:30:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 27 08:30:09 compute-0 sudo[87481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meieftoekagbmqctjnizvzfcjjbmndqw ; /usr/bin/python3'
Jan 27 08:30:09 compute-0 sudo[87481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:10 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev 57302f1d-b25a-4e2c-b8f8-05a105a5a572 (Updating crash deployment (+1 -> 3))
Jan 27 08:30:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 27 08:30:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 27 08:30:10 compute-0 ceph-mgr[74650]: mgr.server handle_report got status from non-daemon mon.compute-1
Jan 27 08:30:10 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T08:30:10.076+0000 7fe0fd675640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Jan 27 08:30:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 27 08:30:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:30:10 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:10 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 27 08:30:10 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 27 08:30:10 compute-0 python3[87483]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:10 compute-0 podman[87484]: 2026-01-27 08:30:10.140746554 +0000 UTC m=+0.040942917 container create af9d38c111e60a99fe2bdaa4e5096b8de1b79557b42e38c58e51bb668b798882 (image=quay.io/ceph/ceph:v18, name=amazing_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:10 compute-0 systemd[1]: Started libpod-conmon-af9d38c111e60a99fe2bdaa4e5096b8de1b79557b42e38c58e51bb668b798882.scope.
Jan 27 08:30:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f0bc3e7751d0c828ece2ae74f0c8fa5f513e27171309be11b79848c45cafd8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f0bc3e7751d0c828ece2ae74f0c8fa5f513e27171309be11b79848c45cafd8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:10 compute-0 podman[87484]: 2026-01-27 08:30:10.203676153 +0000 UTC m=+0.103872526 container init af9d38c111e60a99fe2bdaa4e5096b8de1b79557b42e38c58e51bb668b798882 (image=quay.io/ceph/ceph:v18, name=amazing_nobel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:10 compute-0 podman[87484]: 2026-01-27 08:30:10.209628338 +0000 UTC m=+0.109824701 container start af9d38c111e60a99fe2bdaa4e5096b8de1b79557b42e38c58e51bb668b798882 (image=quay.io/ceph/ceph:v18, name=amazing_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 27 08:30:10 compute-0 podman[87484]: 2026-01-27 08:30:10.213284858 +0000 UTC m=+0.113481251 container attach af9d38c111e60a99fe2bdaa4e5096b8de1b79557b42e38c58e51bb668b798882 (image=quay.io/ceph/ceph:v18, name=amazing_nobel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:10 compute-0 podman[87484]: 2026-01-27 08:30:10.125219007 +0000 UTC m=+0.025415400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:10 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/622189321' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 27 08:30:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 27 08:30:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 27 08:30:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 27 08:30:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1655433859' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 27 08:30:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 27 08:30:11 compute-0 ceph-mon[74357]: Deploying daemon crash.compute-2 on compute-2
Jan 27 08:30:11 compute-0 ceph-mon[74357]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:11 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1655433859' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 27 08:30:11 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1655433859' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:30:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Jan 27 08:30:11 compute-0 amazing_nobel[87500]: pool 'vms' created
Jan 27 08:30:11 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 27 08:30:11 compute-0 systemd[1]: libpod-af9d38c111e60a99fe2bdaa4e5096b8de1b79557b42e38c58e51bb668b798882.scope: Deactivated successfully.
Jan 27 08:30:11 compute-0 podman[87484]: 2026-01-27 08:30:11.822093398 +0000 UTC m=+1.722289761 container died af9d38c111e60a99fe2bdaa4e5096b8de1b79557b42e38c58e51bb668b798882 (image=quay.io/ceph/ceph:v18, name=amazing_nobel, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-70f0bc3e7751d0c828ece2ae74f0c8fa5f513e27171309be11b79848c45cafd8-merged.mount: Deactivated successfully.
Jan 27 08:30:11 compute-0 podman[87484]: 2026-01-27 08:30:11.858095868 +0000 UTC m=+1.758292231 container remove af9d38c111e60a99fe2bdaa4e5096b8de1b79557b42e38c58e51bb668b798882 (image=quay.io/ceph/ceph:v18, name=amazing_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 08:30:11 compute-0 systemd[1]: libpod-conmon-af9d38c111e60a99fe2bdaa4e5096b8de1b79557b42e38c58e51bb668b798882.scope: Deactivated successfully.
Jan 27 08:30:11 compute-0 sudo[87481]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:11 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:12 compute-0 sudo[87564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkdxsoavvqhpcwqnkyuqidsxcsvoptyn ; /usr/bin/python3'
Jan 27 08:30:12 compute-0 sudo[87564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:12 compute-0 python3[87566]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:12 compute-0 podman[87567]: 2026-01-27 08:30:12.200765426 +0000 UTC m=+0.032626127 container create 1bf171325054700590578655fc7c3a8e5ff392d429e6a8d2df53043670732ca4 (image=quay.io/ceph/ceph:v18, name=elegant_elion, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:12 compute-0 systemd[1]: Started libpod-conmon-1bf171325054700590578655fc7c3a8e5ff392d429e6a8d2df53043670732ca4.scope.
Jan 27 08:30:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c256930c265ccf577a49414178261686d91139819056e78bd02165a353de0943/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c256930c265ccf577a49414178261686d91139819056e78bd02165a353de0943/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:12 compute-0 podman[87567]: 2026-01-27 08:30:12.263772759 +0000 UTC m=+0.095633450 container init 1bf171325054700590578655fc7c3a8e5ff392d429e6a8d2df53043670732ca4 (image=quay.io/ceph/ceph:v18, name=elegant_elion, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:12 compute-0 podman[87567]: 2026-01-27 08:30:12.269611349 +0000 UTC m=+0.101472050 container start 1bf171325054700590578655fc7c3a8e5ff392d429e6a8d2df53043670732ca4 (image=quay.io/ceph/ceph:v18, name=elegant_elion, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:12 compute-0 podman[87567]: 2026-01-27 08:30:12.272692504 +0000 UTC m=+0.104553235 container attach 1bf171325054700590578655fc7c3a8e5ff392d429e6a8d2df53043670732ca4 (image=quay.io/ceph/ceph:v18, name=elegant_elion, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:30:12 compute-0 podman[87567]: 2026-01-27 08:30:12.187615836 +0000 UTC m=+0.019476557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v65: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:12 compute-0 ceph-mon[74357]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 27 08:30:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 27 08:30:12 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1655433859' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:30:12 compute-0 ceph-mon[74357]: osdmap e14: 2 total, 2 up, 2 in
Jan 27 08:30:12 compute-0 ceph-mon[74357]: pgmap v65: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 27 08:30:12 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2149860587' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 27 08:30:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Jan 27 08:30:12 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Jan 27 08:30:12 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 15 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:13 compute-0 ceph-mgr[74650]: [progress INFO root] Writing back 4 completed events
Jan 27 08:30:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 27 08:30:13 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:13 compute-0 ceph-mgr[74650]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 27 08:30:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:30:13 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:30:13 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 27 08:30:13 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:13 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev 57302f1d-b25a-4e2c-b8f8-05a105a5a572 (Updating crash deployment (+1 -> 3))
Jan 27 08:30:13 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event 57302f1d-b25a-4e2c-b8f8-05a105a5a572 (Updating crash deployment (+1 -> 3)) in 3 seconds
Jan 27 08:30:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 27 08:30:13 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:30:13 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:30:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:30:13 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:30:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:30:13 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:30:13 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:30:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:30:13 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:13 compute-0 sudo[87609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:13 compute-0 sudo[87609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:13 compute-0 sudo[87609]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:13 compute-0 sudo[87634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:30:13 compute-0 sudo[87634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:13 compute-0 sudo[87634]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:13 compute-0 sudo[87659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:13 compute-0 sudo[87659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:13 compute-0 sudo[87659]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:13 compute-0 sudo[87684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:30:13 compute-0 sudo[87684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:13 compute-0 podman[87747]: 2026-01-27 08:30:13.790107033 +0000 UTC m=+0.048997479 container create 3e13d4d7651adde623772fdb9d29d4eb6119db9e9b7085831cb2cb87f5452726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:13 compute-0 ceph-mon[74357]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 27 08:30:13 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2149860587' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 27 08:30:13 compute-0 ceph-mon[74357]: osdmap e15: 2 total, 2 up, 2 in
Jan 27 08:30:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:30:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:30:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:30:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:13 compute-0 systemd[1]: Started libpod-conmon-3e13d4d7651adde623772fdb9d29d4eb6119db9e9b7085831cb2cb87f5452726.scope.
Jan 27 08:30:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 27 08:30:13 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2149860587' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:30:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Jan 27 08:30:13 compute-0 elegant_elion[87583]: pool 'volumes' created
Jan 27 08:30:13 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Jan 27 08:30:13 compute-0 podman[87567]: 2026-01-27 08:30:13.847003316 +0000 UTC m=+1.678864017 container died 1bf171325054700590578655fc7c3a8e5ff392d429e6a8d2df53043670732ca4 (image=quay.io/ceph/ceph:v18, name=elegant_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:30:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:13 compute-0 systemd[1]: libpod-1bf171325054700590578655fc7c3a8e5ff392d429e6a8d2df53043670732ca4.scope: Deactivated successfully.
Jan 27 08:30:13 compute-0 podman[87747]: 2026-01-27 08:30:13.862830451 +0000 UTC m=+0.121720917 container init 3e13d4d7651adde623772fdb9d29d4eb6119db9e9b7085831cb2cb87f5452726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:13 compute-0 podman[87747]: 2026-01-27 08:30:13.770515704 +0000 UTC m=+0.029406200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:13 compute-0 podman[87747]: 2026-01-27 08:30:13.870105121 +0000 UTC m=+0.128995567 container start 3e13d4d7651adde623772fdb9d29d4eb6119db9e9b7085831cb2cb87f5452726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:13 compute-0 sweet_swanson[87763]: 167 167
Jan 27 08:30:13 compute-0 systemd[1]: libpod-3e13d4d7651adde623772fdb9d29d4eb6119db9e9b7085831cb2cb87f5452726.scope: Deactivated successfully.
Jan 27 08:30:13 compute-0 podman[87747]: 2026-01-27 08:30:13.8744225 +0000 UTC m=+0.133312956 container attach 3e13d4d7651adde623772fdb9d29d4eb6119db9e9b7085831cb2cb87f5452726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:30:13 compute-0 podman[87747]: 2026-01-27 08:30:13.87481133 +0000 UTC m=+0.133701776 container died 3e13d4d7651adde623772fdb9d29d4eb6119db9e9b7085831cb2cb87f5452726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c256930c265ccf577a49414178261686d91139819056e78bd02165a353de0943-merged.mount: Deactivated successfully.
Jan 27 08:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-be1164ab5eace6670bac7d5ec504ed11f75b0f6254a03b3d9c9ce8a65276500d-merged.mount: Deactivated successfully.
Jan 27 08:30:13 compute-0 podman[87747]: 2026-01-27 08:30:13.918161352 +0000 UTC m=+0.177051798 container remove 3e13d4d7651adde623772fdb9d29d4eb6119db9e9b7085831cb2cb87f5452726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:30:13 compute-0 systemd[1]: libpod-conmon-3e13d4d7651adde623772fdb9d29d4eb6119db9e9b7085831cb2cb87f5452726.scope: Deactivated successfully.
Jan 27 08:30:13 compute-0 podman[87567]: 2026-01-27 08:30:13.951645052 +0000 UTC m=+1.783505763 container remove 1bf171325054700590578655fc7c3a8e5ff392d429e6a8d2df53043670732ca4 (image=quay.io/ceph/ceph:v18, name=elegant_elion, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:30:13 compute-0 systemd[1]: libpod-conmon-1bf171325054700590578655fc7c3a8e5ff392d429e6a8d2df53043670732ca4.scope: Deactivated successfully.
Jan 27 08:30:13 compute-0 sudo[87564]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:14 compute-0 podman[87802]: 2026-01-27 08:30:14.064152844 +0000 UTC m=+0.036919785 container create d91c3386365a8804ca0b50703d705f6d3309c5ca7a2dd46ac2996430746d7231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 08:30:14 compute-0 systemd[1]: Started libpod-conmon-d91c3386365a8804ca0b50703d705f6d3309c5ca7a2dd46ac2996430746d7231.scope.
Jan 27 08:30:14 compute-0 sudo[87839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zydugefdusqyxmxxowkexslfiqkenioq ; /usr/bin/python3'
Jan 27 08:30:14 compute-0 sudo[87839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d4f3042db4ad54b7740664389ddb8bbbb3f5b23f08180337fc00d1e351d50a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d4f3042db4ad54b7740664389ddb8bbbb3f5b23f08180337fc00d1e351d50a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d4f3042db4ad54b7740664389ddb8bbbb3f5b23f08180337fc00d1e351d50a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d4f3042db4ad54b7740664389ddb8bbbb3f5b23f08180337fc00d1e351d50a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d4f3042db4ad54b7740664389ddb8bbbb3f5b23f08180337fc00d1e351d50a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:14 compute-0 podman[87802]: 2026-01-27 08:30:14.048406332 +0000 UTC m=+0.021173293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:14 compute-0 podman[87802]: 2026-01-27 08:30:14.146657893 +0000 UTC m=+0.119424854 container init d91c3386365a8804ca0b50703d705f6d3309c5ca7a2dd46ac2996430746d7231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:14 compute-0 podman[87802]: 2026-01-27 08:30:14.15640563 +0000 UTC m=+0.129172571 container start d91c3386365a8804ca0b50703d705f6d3309c5ca7a2dd46ac2996430746d7231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:30:14 compute-0 podman[87802]: 2026-01-27 08:30:14.161360686 +0000 UTC m=+0.134127637 container attach d91c3386365a8804ca0b50703d705f6d3309c5ca7a2dd46ac2996430746d7231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:30:14 compute-0 python3[87845]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:14 compute-0 podman[87849]: 2026-01-27 08:30:14.306895737 +0000 UTC m=+0.043478926 container create 1047d6905466cfb6e1e513673a662014ec97ee8c388320da598f0ca2db88875a (image=quay.io/ceph/ceph:v18, name=amazing_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:14 compute-0 systemd[1]: Started libpod-conmon-1047d6905466cfb6e1e513673a662014ec97ee8c388320da598f0ca2db88875a.scope.
Jan 27 08:30:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301d081dfb37cdc1efa836f3356f16796cbcd873a50edec060c834ebbf66ee9c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301d081dfb37cdc1efa836f3356f16796cbcd873a50edec060c834ebbf66ee9c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:14 compute-0 podman[87849]: 2026-01-27 08:30:14.384249793 +0000 UTC m=+0.120833012 container init 1047d6905466cfb6e1e513673a662014ec97ee8c388320da598f0ca2db88875a (image=quay.io/ceph/ceph:v18, name=amazing_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 27 08:30:14 compute-0 podman[87849]: 2026-01-27 08:30:14.290535477 +0000 UTC m=+0.027118696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:14 compute-0 podman[87849]: 2026-01-27 08:30:14.391979735 +0000 UTC m=+0.128562934 container start 1047d6905466cfb6e1e513673a662014ec97ee8c388320da598f0ca2db88875a (image=quay.io/ceph/ceph:v18, name=amazing_chandrasekhar, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:14 compute-0 podman[87849]: 2026-01-27 08:30:14.395575105 +0000 UTC m=+0.132158314 container attach 1047d6905466cfb6e1e513673a662014ec97ee8c388320da598f0ca2db88875a (image=quay.io/ceph/ceph:v18, name=amazing_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 08:30:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v68: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 27 08:30:14 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2149860587' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:30:14 compute-0 ceph-mon[74357]: osdmap e16: 2 total, 2 up, 2 in
Jan 27 08:30:14 compute-0 ceph-mon[74357]: pgmap v68: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Jan 27 08:30:14 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Jan 27 08:30:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:30:14
Jan 27 08:30:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:30:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Some PGs (0.333333) are unknown; try again later
Jan 27 08:30:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 27 08:30:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4218364480' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 27 08:30:14 compute-0 exciting_meitner[87843]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:30:14 compute-0 exciting_meitner[87843]: --> relative data size: 1.0
Jan 27 08:30:14 compute-0 exciting_meitner[87843]: --> All data devices are unavailable
Jan 27 08:30:14 compute-0 systemd[1]: libpod-d91c3386365a8804ca0b50703d705f6d3309c5ca7a2dd46ac2996430746d7231.scope: Deactivated successfully.
Jan 27 08:30:14 compute-0 podman[87802]: 2026-01-27 08:30:14.994524768 +0000 UTC m=+0.967291709 container died d91c3386365a8804ca0b50703d705f6d3309c5ca7a2dd46ac2996430746d7231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:30:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Jan 27 08:30:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-04d4f3042db4ad54b7740664389ddb8bbbb3f5b23f08180337fc00d1e351d50a-merged.mount: Deactivated successfully.
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:30:15 compute-0 podman[87802]: 2026-01-27 08:30:15.049255132 +0000 UTC m=+1.022022073 container remove d91c3386365a8804ca0b50703d705f6d3309c5ca7a2dd46ac2996430746d7231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:15 compute-0 systemd[1]: libpod-conmon-d91c3386365a8804ca0b50703d705f6d3309c5ca7a2dd46ac2996430746d7231.scope: Deactivated successfully.
Jan 27 08:30:15 compute-0 sudo[87684]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:15 compute-0 sudo[87911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:15 compute-0 sudo[87911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:15 compute-0 sudo[87911]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:15 compute-0 sudo[87936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:30:15 compute-0 sudo[87936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:15 compute-0 sudo[87936]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:15 compute-0 sudo[87961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:15 compute-0 sudo[87961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:15 compute-0 sudo[87961]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:15 compute-0 sudo[87986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:30:15 compute-0 sudo[87986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:15 compute-0 podman[88052]: 2026-01-27 08:30:15.606155269 +0000 UTC m=+0.041870512 container create f03b4c84a142924771b1003afdf1c0dedf4ea97539fef8b42f5aad33a328db40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:30:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "1667e7f8-92ed-48ed-a4d5-c705a9a173cc"} v 0) v1
Jan 27 08:30:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1667e7f8-92ed-48ed-a4d5-c705a9a173cc"}]: dispatch
Jan 27 08:30:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 27 08:30:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4218364480' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:30:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:30:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1667e7f8-92ed-48ed-a4d5-c705a9a173cc"}]': finished
Jan 27 08:30:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Jan 27 08:30:15 compute-0 amazing_chandrasekhar[87864]: pool 'backups' created
Jan 27 08:30:15 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Jan 27 08:30:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:15 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev e7c4700e-0fb7-473c-bed5-0d0f1664f3d8 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 27 08:30:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Jan 27 08:30:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:30:15 compute-0 podman[87849]: 2026-01-27 08:30:15.642087027 +0000 UTC m=+1.378670216 container died 1047d6905466cfb6e1e513673a662014ec97ee8c388320da598f0ca2db88875a (image=quay.io/ceph/ceph:v18, name=amazing_chandrasekhar, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:15 compute-0 systemd[1]: Started libpod-conmon-f03b4c84a142924771b1003afdf1c0dedf4ea97539fef8b42f5aad33a328db40.scope.
Jan 27 08:30:15 compute-0 systemd[1]: libpod-1047d6905466cfb6e1e513673a662014ec97ee8c388320da598f0ca2db88875a.scope: Deactivated successfully.
Jan 27 08:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-301d081dfb37cdc1efa836f3356f16796cbcd873a50edec060c834ebbf66ee9c-merged.mount: Deactivated successfully.
Jan 27 08:30:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:15 compute-0 podman[87849]: 2026-01-27 08:30:15.677505 +0000 UTC m=+1.414088189 container remove 1047d6905466cfb6e1e513673a662014ec97ee8c388320da598f0ca2db88875a (image=quay.io/ceph/ceph:v18, name=amazing_chandrasekhar, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:15 compute-0 podman[88052]: 2026-01-27 08:30:15.588661399 +0000 UTC m=+0.024376672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:15 compute-0 podman[88052]: 2026-01-27 08:30:15.687621028 +0000 UTC m=+0.123336291 container init f03b4c84a142924771b1003afdf1c0dedf4ea97539fef8b42f5aad33a328db40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:15 compute-0 sudo[87839]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:15 compute-0 podman[88052]: 2026-01-27 08:30:15.696666017 +0000 UTC m=+0.132381260 container start f03b4c84a142924771b1003afdf1c0dedf4ea97539fef8b42f5aad33a328db40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 08:30:15 compute-0 systemd[1]: libpod-conmon-1047d6905466cfb6e1e513673a662014ec97ee8c388320da598f0ca2db88875a.scope: Deactivated successfully.
Jan 27 08:30:15 compute-0 laughing_chatelet[88071]: 167 167
Jan 27 08:30:15 compute-0 podman[88052]: 2026-01-27 08:30:15.702134997 +0000 UTC m=+0.137850240 container attach f03b4c84a142924771b1003afdf1c0dedf4ea97539fef8b42f5aad33a328db40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:15 compute-0 systemd[1]: libpod-f03b4c84a142924771b1003afdf1c0dedf4ea97539fef8b42f5aad33a328db40.scope: Deactivated successfully.
Jan 27 08:30:15 compute-0 podman[88052]: 2026-01-27 08:30:15.703200417 +0000 UTC m=+0.138915660 container died f03b4c84a142924771b1003afdf1c0dedf4ea97539fef8b42f5aad33a328db40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c825c9dd32b1c305af592caa0e10bc02465ee1fa9b987ba49c036d5547526148-merged.mount: Deactivated successfully.
Jan 27 08:30:15 compute-0 podman[88052]: 2026-01-27 08:30:15.744774339 +0000 UTC m=+0.180489582 container remove f03b4c84a142924771b1003afdf1c0dedf4ea97539fef8b42f5aad33a328db40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:15 compute-0 systemd[1]: libpod-conmon-f03b4c84a142924771b1003afdf1c0dedf4ea97539fef8b42f5aad33a328db40.scope: Deactivated successfully.
Jan 27 08:30:15 compute-0 sudo[88120]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zinsijhytpwkkfawmshmvlmxybdduqbx ; /usr/bin/python3'
Jan 27 08:30:15 compute-0 sudo[88120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:15 compute-0 ceph-mon[74357]: osdmap e17: 2 total, 2 up, 2 in
Jan 27 08:30:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4218364480' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 27 08:30:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:30:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3378768301' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1667e7f8-92ed-48ed-a4d5-c705a9a173cc"}]: dispatch
Jan 27 08:30:15 compute-0 ceph-mon[74357]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1667e7f8-92ed-48ed-a4d5-c705a9a173cc"}]: dispatch
Jan 27 08:30:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4218364480' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:30:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:30:15 compute-0 ceph-mon[74357]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1667e7f8-92ed-48ed-a4d5-c705a9a173cc"}]': finished
Jan 27 08:30:15 compute-0 ceph-mon[74357]: osdmap e18: 3 total, 2 up, 3 in
Jan 27 08:30:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:30:15 compute-0 podman[88128]: 2026-01-27 08:30:15.908623533 +0000 UTC m=+0.047138647 container create 94443d1b56af400bdc43be48ca7031b8873056f6d74908dffeb4ac6f1fe159d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_swanson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:30:15 compute-0 systemd[1]: Started libpod-conmon-94443d1b56af400bdc43be48ca7031b8873056f6d74908dffeb4ac6f1fe159d7.scope.
Jan 27 08:30:15 compute-0 python3[88122]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bb20254e1a8f8fe88ac76ded98303592e0263f4d4fb309baa9fb5e8efaded56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bb20254e1a8f8fe88ac76ded98303592e0263f4d4fb309baa9fb5e8efaded56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bb20254e1a8f8fe88ac76ded98303592e0263f4d4fb309baa9fb5e8efaded56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bb20254e1a8f8fe88ac76ded98303592e0263f4d4fb309baa9fb5e8efaded56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:15 compute-0 podman[88128]: 2026-01-27 08:30:15.888772097 +0000 UTC m=+0.027287221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:15 compute-0 podman[88128]: 2026-01-27 08:30:15.992245052 +0000 UTC m=+0.130760196 container init 94443d1b56af400bdc43be48ca7031b8873056f6d74908dffeb4ac6f1fe159d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_swanson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:15 compute-0 podman[88128]: 2026-01-27 08:30:15.99909584 +0000 UTC m=+0.137610964 container start 94443d1b56af400bdc43be48ca7031b8873056f6d74908dffeb4ac6f1fe159d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:30:16 compute-0 podman[88128]: 2026-01-27 08:30:16.00492506 +0000 UTC m=+0.143440174 container attach 94443d1b56af400bdc43be48ca7031b8873056f6d74908dffeb4ac6f1fe159d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_swanson, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 27 08:30:16 compute-0 podman[88147]: 2026-01-27 08:30:16.034258446 +0000 UTC m=+0.048117423 container create a7872015d6c2693f1a720d827ce68330c5603807d5ccbc5b243447e84cc8fa10 (image=quay.io/ceph/ceph:v18, name=pensive_shaw, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:16 compute-0 systemd[1]: Started libpod-conmon-a7872015d6c2693f1a720d827ce68330c5603807d5ccbc5b243447e84cc8fa10.scope.
Jan 27 08:30:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fca2cd6b7360eedbd69004b9718ee1b42117873db529329ab65429bce3b4b1d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fca2cd6b7360eedbd69004b9718ee1b42117873db529329ab65429bce3b4b1d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:16 compute-0 podman[88147]: 2026-01-27 08:30:16.011971134 +0000 UTC m=+0.025830141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:16 compute-0 podman[88147]: 2026-01-27 08:30:16.108704622 +0000 UTC m=+0.122563629 container init a7872015d6c2693f1a720d827ce68330c5603807d5ccbc5b243447e84cc8fa10 (image=quay.io/ceph/ceph:v18, name=pensive_shaw, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 27 08:30:16 compute-0 podman[88147]: 2026-01-27 08:30:16.113404582 +0000 UTC m=+0.127263569 container start a7872015d6c2693f1a720d827ce68330c5603807d5ccbc5b243447e84cc8fa10 (image=quay.io/ceph/ceph:v18, name=pensive_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:16 compute-0 podman[88147]: 2026-01-27 08:30:16.116155087 +0000 UTC m=+0.130014104 container attach a7872015d6c2693f1a720d827ce68330c5603807d5ccbc5b243447e84cc8fa10 (image=quay.io/ceph/ceph:v18, name=pensive_shaw, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 08:30:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 27 08:30:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:30:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Jan 27 08:30:16 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Jan 27 08:30:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:16 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:16 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev bff847d3-5f04-4f71-822b-d473d1c3b62f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 27 08:30:16 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev e7c4700e-0fb7-473c-bed5-0d0f1664f3d8 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 27 08:30:16 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event e7c4700e-0fb7-473c-bed5-0d0f1664f3d8 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 1 seconds
Jan 27 08:30:16 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev bff847d3-5f04-4f71-822b-d473d1c3b62f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 27 08:30:16 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event bff847d3-5f04-4f71-822b-d473d1c3b62f (PG autoscaler increasing pool 3 PGs from 1 to 32) in 0 seconds
Jan 27 08:30:16 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 27 08:30:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4287254966' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 27 08:30:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v72: 4 pgs: 2 active+clean, 1 creating+peering, 1 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 27 08:30:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:30:16 compute-0 zealous_swanson[88144]: {
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:     "0": [
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:         {
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             "devices": [
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "/dev/loop3"
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             ],
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             "lv_name": "ceph_lv0",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             "lv_size": "7511998464",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             "name": "ceph_lv0",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             "tags": {
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "ceph.cluster_name": "ceph",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "ceph.crush_device_class": "",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "ceph.encrypted": "0",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "ceph.osd_id": "0",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "ceph.type": "block",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:                 "ceph.vdo": "0"
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             },
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             "type": "block",
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:             "vg_name": "ceph_vg0"
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:         }
Jan 27 08:30:16 compute-0 zealous_swanson[88144]:     ]
Jan 27 08:30:16 compute-0 zealous_swanson[88144]: }
Jan 27 08:30:16 compute-0 systemd[1]: libpod-94443d1b56af400bdc43be48ca7031b8873056f6d74908dffeb4ac6f1fe159d7.scope: Deactivated successfully.
Jan 27 08:30:16 compute-0 podman[88128]: 2026-01-27 08:30:16.791959273 +0000 UTC m=+0.930474387 container died 94443d1b56af400bdc43be48ca7031b8873056f6d74908dffeb4ac6f1fe159d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_swanson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 08:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bb20254e1a8f8fe88ac76ded98303592e0263f4d4fb309baa9fb5e8efaded56-merged.mount: Deactivated successfully.
Jan 27 08:30:16 compute-0 podman[88128]: 2026-01-27 08:30:16.842938234 +0000 UTC m=+0.981453348 container remove 94443d1b56af400bdc43be48ca7031b8873056f6d74908dffeb4ac6f1fe159d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 08:30:16 compute-0 systemd[1]: libpod-conmon-94443d1b56af400bdc43be48ca7031b8873056f6d74908dffeb4ac6f1fe159d7.scope: Deactivated successfully.
Jan 27 08:30:16 compute-0 sudo[87986]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:16 compute-0 sudo[88206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:16 compute-0 sudo[88206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:16 compute-0 sudo[88206]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:16 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2126813383' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 27 08:30:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:30:16 compute-0 ceph-mon[74357]: osdmap e19: 3 total, 2 up, 3 in
Jan 27 08:30:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:16 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4287254966' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 27 08:30:16 compute-0 ceph-mon[74357]: pgmap v72: 4 pgs: 2 active+clean, 1 creating+peering, 1 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:30:16 compute-0 sudo[88231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:30:16 compute-0 sudo[88231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:16 compute-0 sudo[88231]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:17 compute-0 sudo[88256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:17 compute-0 sudo[88256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:17 compute-0 sudo[88256]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:17 compute-0 sudo[88281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:30:17 compute-0 sudo[88281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:17 compute-0 podman[88347]: 2026-01-27 08:30:17.440862709 +0000 UTC m=+0.112094902 container create 43ccf92fab69d36fd8b4a8777b54cf59c5557188f49a98ca45b547056401de1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mayer, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:17 compute-0 podman[88347]: 2026-01-27 08:30:17.350841345 +0000 UTC m=+0.022073568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:17 compute-0 systemd[1]: Started libpod-conmon-43ccf92fab69d36fd8b4a8777b54cf59c5557188f49a98ca45b547056401de1f.scope.
Jan 27 08:30:17 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:17 compute-0 podman[88347]: 2026-01-27 08:30:17.562662806 +0000 UTC m=+0.233895079 container init 43ccf92fab69d36fd8b4a8777b54cf59c5557188f49a98ca45b547056401de1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:30:17 compute-0 podman[88347]: 2026-01-27 08:30:17.574245885 +0000 UTC m=+0.245478078 container start 43ccf92fab69d36fd8b4a8777b54cf59c5557188f49a98ca45b547056401de1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mayer, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:17 compute-0 podman[88347]: 2026-01-27 08:30:17.577348421 +0000 UTC m=+0.248580654 container attach 43ccf92fab69d36fd8b4a8777b54cf59c5557188f49a98ca45b547056401de1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mayer, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 27 08:30:17 compute-0 happy_mayer[88363]: 167 167
Jan 27 08:30:17 compute-0 systemd[1]: libpod-43ccf92fab69d36fd8b4a8777b54cf59c5557188f49a98ca45b547056401de1f.scope: Deactivated successfully.
Jan 27 08:30:17 compute-0 podman[88347]: 2026-01-27 08:30:17.584385524 +0000 UTC m=+0.255617747 container died 43ccf92fab69d36fd8b4a8777b54cf59c5557188f49a98ca45b547056401de1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mayer, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-589db922e88c79afe07b8db1d2fd18ada7303e8ed473c453c5fc9c4056deffeb-merged.mount: Deactivated successfully.
Jan 27 08:30:17 compute-0 podman[88347]: 2026-01-27 08:30:17.625637798 +0000 UTC m=+0.296869991 container remove 43ccf92fab69d36fd8b4a8777b54cf59c5557188f49a98ca45b547056401de1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mayer, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 08:30:17 compute-0 systemd[1]: libpod-conmon-43ccf92fab69d36fd8b4a8777b54cf59c5557188f49a98ca45b547056401de1f.scope: Deactivated successfully.
Jan 27 08:30:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 27 08:30:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4287254966' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:30:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:30:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Jan 27 08:30:17 compute-0 pensive_shaw[88164]: pool 'images' created
Jan 27 08:30:17 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Jan 27 08:30:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:17 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:17 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 20 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=20 pruub=11.171740532s) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active pruub 58.299301147s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:17 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 20 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=20 pruub=11.171740532s) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown pruub 58.299301147s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:17 compute-0 systemd[1]: libpod-a7872015d6c2693f1a720d827ce68330c5603807d5ccbc5b243447e84cc8fa10.scope: Deactivated successfully.
Jan 27 08:30:17 compute-0 podman[88147]: 2026-01-27 08:30:17.676002892 +0000 UTC m=+1.689861909 container died a7872015d6c2693f1a720d827ce68330c5603807d5ccbc5b243447e84cc8fa10 (image=quay.io/ceph/ceph:v18, name=pensive_shaw, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fca2cd6b7360eedbd69004b9718ee1b42117873db529329ab65429bce3b4b1d-merged.mount: Deactivated successfully.
Jan 27 08:30:17 compute-0 podman[88147]: 2026-01-27 08:30:17.735027305 +0000 UTC m=+1.748886292 container remove a7872015d6c2693f1a720d827ce68330c5603807d5ccbc5b243447e84cc8fa10 (image=quay.io/ceph/ceph:v18, name=pensive_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 27 08:30:17 compute-0 systemd[1]: libpod-conmon-a7872015d6c2693f1a720d827ce68330c5603807d5ccbc5b243447e84cc8fa10.scope: Deactivated successfully.
Jan 27 08:30:17 compute-0 sudo[88120]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:17 compute-0 podman[88402]: 2026-01-27 08:30:17.794198441 +0000 UTC m=+0.042965662 container create 68c52bf7516d29fba34e7c5e7983eda0b3ccb76c774f8fe3cf8912fd6640b67f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:17 compute-0 systemd[1]: Started libpod-conmon-68c52bf7516d29fba34e7c5e7983eda0b3ccb76c774f8fe3cf8912fd6640b67f.scope.
Jan 27 08:30:17 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59dad5f18256f0b6c6fb6903b8ec0c3feaacbe4b4b6af30fc780977b8ab88bb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59dad5f18256f0b6c6fb6903b8ec0c3feaacbe4b4b6af30fc780977b8ab88bb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59dad5f18256f0b6c6fb6903b8ec0c3feaacbe4b4b6af30fc780977b8ab88bb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59dad5f18256f0b6c6fb6903b8ec0c3feaacbe4b4b6af30fc780977b8ab88bb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:17 compute-0 podman[88402]: 2026-01-27 08:30:17.777856751 +0000 UTC m=+0.026624002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:17 compute-0 podman[88402]: 2026-01-27 08:30:17.878873438 +0000 UTC m=+0.127640679 container init 68c52bf7516d29fba34e7c5e7983eda0b3ccb76c774f8fe3cf8912fd6640b67f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:30:17 compute-0 podman[88402]: 2026-01-27 08:30:17.885359337 +0000 UTC m=+0.134126558 container start 68c52bf7516d29fba34e7c5e7983eda0b3ccb76c774f8fe3cf8912fd6640b67f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:30:17 compute-0 podman[88402]: 2026-01-27 08:30:17.89383775 +0000 UTC m=+0.142604981 container attach 68c52bf7516d29fba34e7c5e7983eda0b3ccb76c774f8fe3cf8912fd6640b67f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 27 08:30:17 compute-0 sudo[88447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejlhcbmjtgnbthqwscecrgwgnqcawrjr ; /usr/bin/python3'
Jan 27 08:30:17 compute-0 sudo[88447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:18 compute-0 python3[88449]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:18 compute-0 podman[88450]: 2026-01-27 08:30:18.087429701 +0000 UTC m=+0.036312419 container create 029d5fc3c24c0a6edb643205bb041e9047bdb6f87feb9112743c1768fee95aa7 (image=quay.io/ceph/ceph:v18, name=romantic_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:30:18 compute-0 systemd[1]: Started libpod-conmon-029d5fc3c24c0a6edb643205bb041e9047bdb6f87feb9112743c1768fee95aa7.scope.
Jan 27 08:30:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bd57fabcb5b8f890ebc4104ca17e090df9f8417f77de15b60dea96e2ba34c8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bd57fabcb5b8f890ebc4104ca17e090df9f8417f77de15b60dea96e2ba34c8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:18 compute-0 ceph-mgr[74650]: [progress INFO root] Writing back 7 completed events
Jan 27 08:30:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 27 08:30:18 compute-0 podman[88450]: 2026-01-27 08:30:18.163462741 +0000 UTC m=+0.112345479 container init 029d5fc3c24c0a6edb643205bb041e9047bdb6f87feb9112743c1768fee95aa7 (image=quay.io/ceph/ceph:v18, name=romantic_germain, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 08:30:18 compute-0 podman[88450]: 2026-01-27 08:30:18.070971079 +0000 UTC m=+0.019853827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:18 compute-0 podman[88450]: 2026-01-27 08:30:18.170201956 +0000 UTC m=+0.119084674 container start 029d5fc3c24c0a6edb643205bb041e9047bdb6f87feb9112743c1768fee95aa7 (image=quay.io/ceph/ceph:v18, name=romantic_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:30:18 compute-0 podman[88450]: 2026-01-27 08:30:18.175869871 +0000 UTC m=+0.124752589 container attach 029d5fc3c24c0a6edb643205bb041e9047bdb6f87feb9112743c1768fee95aa7 (image=quay.io/ceph/ceph:v18, name=romantic_germain, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 27 08:30:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 27 08:30:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v74: 36 pgs: 2 active+clean, 1 creating+peering, 33 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:18 compute-0 dreamy_lederberg[88419]: {
Jan 27 08:30:18 compute-0 dreamy_lederberg[88419]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:30:18 compute-0 dreamy_lederberg[88419]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:30:18 compute-0 dreamy_lederberg[88419]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:30:18 compute-0 dreamy_lederberg[88419]:         "osd_id": 0,
Jan 27 08:30:18 compute-0 dreamy_lederberg[88419]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:30:18 compute-0 dreamy_lederberg[88419]:         "type": "bluestore"
Jan 27 08:30:18 compute-0 dreamy_lederberg[88419]:     }
Jan 27 08:30:18 compute-0 dreamy_lederberg[88419]: }
Jan 27 08:30:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 27 08:30:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/869782537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 27 08:30:18 compute-0 ceph-mon[74357]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 27 08:30:18 compute-0 systemd[1]: libpod-68c52bf7516d29fba34e7c5e7983eda0b3ccb76c774f8fe3cf8912fd6640b67f.scope: Deactivated successfully.
Jan 27 08:30:18 compute-0 podman[88402]: 2026-01-27 08:30:18.735582046 +0000 UTC m=+0.984349267 container died 68c52bf7516d29fba34e7c5e7983eda0b3ccb76c774f8fe3cf8912fd6640b67f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:30:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-59dad5f18256f0b6c6fb6903b8ec0c3feaacbe4b4b6af30fc780977b8ab88bb8-merged.mount: Deactivated successfully.
Jan 27 08:30:18 compute-0 podman[88402]: 2026-01-27 08:30:18.80521123 +0000 UTC m=+1.053978451 container remove 68c52bf7516d29fba34e7c5e7983eda0b3ccb76c774f8fe3cf8912fd6640b67f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 08:30:18 compute-0 systemd[1]: libpod-conmon-68c52bf7516d29fba34e7c5e7983eda0b3ccb76c774f8fe3cf8912fd6640b67f.scope: Deactivated successfully.
Jan 27 08:30:18 compute-0 sudo[88281]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:30:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Jan 27 08:30:19 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4287254966' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:30:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:30:19 compute-0 ceph-mon[74357]: osdmap e20: 3 total, 2 up, 3 in
Jan 27 08:30:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:19 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1e( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1f( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1d( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1c( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.b( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.a( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.9( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.8( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.7( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.6( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.5( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.4( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.2( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.3( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.c( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.e( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.d( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.f( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.11( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.10( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.12( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.13( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.14( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.16( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.15( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.17( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.18( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.19( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1a( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1b( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:19 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:30:19 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.a( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.9( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.8( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.6( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.7( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.4( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.2( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.0( empty local-lis/les=20/21 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.3( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.e( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1e( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.11( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.13( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.15( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.16( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.19( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.17( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1a( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.1b( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.10( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 21 pg[2.14( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:19 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 27 08:30:19 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 27 08:30:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 27 08:30:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/869782537' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:30:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Jan 27 08:30:20 compute-0 romantic_germain[88466]: pool 'cephfs.cephfs.meta' created
Jan 27 08:30:20 compute-0 ceph-mon[74357]: pgmap v74: 36 pgs: 2 active+clean, 1 creating+peering, 33 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:20 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/869782537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 27 08:30:20 compute-0 ceph-mon[74357]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 27 08:30:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:20 compute-0 ceph-mon[74357]: osdmap e21: 3 total, 2 up, 3 in
Jan 27 08:30:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:20 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Jan 27 08:30:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:20 compute-0 systemd[1]: libpod-029d5fc3c24c0a6edb643205bb041e9047bdb6f87feb9112743c1768fee95aa7.scope: Deactivated successfully.
Jan 27 08:30:20 compute-0 podman[88450]: 2026-01-27 08:30:20.435935213 +0000 UTC m=+2.384817941 container died 029d5fc3c24c0a6edb643205bb041e9047bdb6f87feb9112743c1768fee95aa7 (image=quay.io/ceph/ceph:v18, name=romantic_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:20 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:20 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1bd57fabcb5b8f890ebc4104ca17e090df9f8417f77de15b60dea96e2ba34c8-merged.mount: Deactivated successfully.
Jan 27 08:30:20 compute-0 podman[88450]: 2026-01-27 08:30:20.496379505 +0000 UTC m=+2.445262223 container remove 029d5fc3c24c0a6edb643205bb041e9047bdb6f87feb9112743c1768fee95aa7 (image=quay.io/ceph/ceph:v18, name=romantic_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 27 08:30:20 compute-0 systemd[1]: libpod-conmon-029d5fc3c24c0a6edb643205bb041e9047bdb6f87feb9112743c1768fee95aa7.scope: Deactivated successfully.
Jan 27 08:30:20 compute-0 sudo[88447]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:20 compute-0 sudo[88558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufksdgyjunjoltfpgkjcoxqqusniyffh ; /usr/bin/python3'
Jan 27 08:30:20 compute-0 sudo[88558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v77: 37 pgs: 5 active+clean, 32 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 27 08:30:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:30:20 compute-0 python3[88560]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:20 compute-0 podman[88561]: 2026-01-27 08:30:20.857193732 +0000 UTC m=+0.052786821 container create 2c055c9318384bb7a4e92ef7fb021e86cbb2c8b64e204913e888193d6f18fd39 (image=quay.io/ceph/ceph:v18, name=inspiring_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:20 compute-0 systemd[1]: Started libpod-conmon-2c055c9318384bb7a4e92ef7fb021e86cbb2c8b64e204913e888193d6f18fd39.scope.
Jan 27 08:30:20 compute-0 podman[88561]: 2026-01-27 08:30:20.829237723 +0000 UTC m=+0.024830832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc34750fa7983cd42cfcba83f4d949546292b708ba33f34a425ad67cda6ba1e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc34750fa7983cd42cfcba83f4d949546292b708ba33f34a425ad67cda6ba1e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:20 compute-0 podman[88561]: 2026-01-27 08:30:20.949059277 +0000 UTC m=+0.144652386 container init 2c055c9318384bb7a4e92ef7fb021e86cbb2c8b64e204913e888193d6f18fd39 (image=quay.io/ceph/ceph:v18, name=inspiring_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:30:20 compute-0 podman[88561]: 2026-01-27 08:30:20.958377703 +0000 UTC m=+0.153970792 container start 2c055c9318384bb7a4e92ef7fb021e86cbb2c8b64e204913e888193d6f18fd39 (image=quay.io/ceph/ceph:v18, name=inspiring_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 27 08:30:20 compute-0 podman[88561]: 2026-01-27 08:30:20.968615614 +0000 UTC m=+0.164208713 container attach 2c055c9318384bb7a4e92ef7fb021e86cbb2c8b64e204913e888193d6f18fd39 (image=quay.io/ceph/ceph:v18, name=inspiring_vaughan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 27 08:30:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 27 08:30:21 compute-0 ceph-mon[74357]: 2.1 scrub starts
Jan 27 08:30:21 compute-0 ceph-mon[74357]: 2.1 scrub ok
Jan 27 08:30:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:21 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/869782537' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:30:21 compute-0 ceph-mon[74357]: osdmap e22: 3 total, 2 up, 3 in
Jan 27 08:30:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:21 compute-0 ceph-mon[74357]: pgmap v77: 37 pgs: 5 active+clean, 32 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:30:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:30:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Jan 27 08:30:21 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Jan 27 08:30:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:21 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:21 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 27 08:30:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3618494709' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 27 08:30:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 27 08:30:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:30:22 compute-0 ceph-mon[74357]: osdmap e23: 3 total, 2 up, 3 in
Jan 27 08:30:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:22 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3618494709' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 27 08:30:22 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3618494709' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:30:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Jan 27 08:30:22 compute-0 inspiring_vaughan[88577]: pool 'cephfs.cephfs.data' created
Jan 27 08:30:22 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Jan 27 08:30:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:22 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:22 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:22 compute-0 systemd[1]: libpod-2c055c9318384bb7a4e92ef7fb021e86cbb2c8b64e204913e888193d6f18fd39.scope: Deactivated successfully.
Jan 27 08:30:22 compute-0 podman[88561]: 2026-01-27 08:30:22.489645223 +0000 UTC m=+1.685238322 container died 2c055c9318384bb7a4e92ef7fb021e86cbb2c8b64e204913e888193d6f18fd39 (image=quay.io/ceph/ceph:v18, name=inspiring_vaughan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:30:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddc34750fa7983cd42cfcba83f4d949546292b708ba33f34a425ad67cda6ba1e-merged.mount: Deactivated successfully.
Jan 27 08:30:22 compute-0 podman[88561]: 2026-01-27 08:30:22.529053626 +0000 UTC m=+1.724646715 container remove 2c055c9318384bb7a4e92ef7fb021e86cbb2c8b64e204913e888193d6f18fd39 (image=quay.io/ceph/ceph:v18, name=inspiring_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:30:22 compute-0 systemd[75978]: Starting Mark boot as successful...
Jan 27 08:30:22 compute-0 systemd[75978]: Finished Mark boot as successful.
Jan 27 08:30:22 compute-0 systemd[1]: libpod-conmon-2c055c9318384bb7a4e92ef7fb021e86cbb2c8b64e204913e888193d6f18fd39.scope: Deactivated successfully.
Jan 27 08:30:22 compute-0 sudo[88558]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v80: 69 pgs: 36 active+clean, 33 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:22 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 27 08:30:22 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:22 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 27 08:30:22 compute-0 sudo[88638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjglssjhzqfezrbwpqlnikpydqsnrpls ; /usr/bin/python3'
Jan 27 08:30:22 compute-0 sudo[88638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:22 compute-0 python3[88640]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:22 compute-0 podman[88641]: 2026-01-27 08:30:22.91702033 +0000 UTC m=+0.039019353 container create 36c44b705ea98cd5e8571beadf00202e870ef3ca981246d366fdd62ada1bd1b1 (image=quay.io/ceph/ceph:v18, name=boring_lalande, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Jan 27 08:30:22 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 27 08:30:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:30:22 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:22 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 27 08:30:22 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 27 08:30:22 compute-0 systemd[1]: Started libpod-conmon-36c44b705ea98cd5e8571beadf00202e870ef3ca981246d366fdd62ada1bd1b1.scope.
Jan 27 08:30:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a6f4eb33dabd12a63171fcb6ad5c7e4d80bc994c155d007819bf273d36c895/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a6f4eb33dabd12a63171fcb6ad5c7e4d80bc994c155d007819bf273d36c895/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:22 compute-0 podman[88641]: 2026-01-27 08:30:22.901645087 +0000 UTC m=+0.023644130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:23 compute-0 podman[88641]: 2026-01-27 08:30:23.010078718 +0000 UTC m=+0.132077741 container init 36c44b705ea98cd5e8571beadf00202e870ef3ca981246d366fdd62ada1bd1b1 (image=quay.io/ceph/ceph:v18, name=boring_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:30:23 compute-0 podman[88641]: 2026-01-27 08:30:23.015123007 +0000 UTC m=+0.137122030 container start 36c44b705ea98cd5e8571beadf00202e870ef3ca981246d366fdd62ada1bd1b1 (image=quay.io/ceph/ceph:v18, name=boring_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:23 compute-0 podman[88641]: 2026-01-27 08:30:23.019345402 +0000 UTC m=+0.141344445 container attach 36c44b705ea98cd5e8571beadf00202e870ef3ca981246d366fdd62ada1bd1b1 (image=quay.io/ceph/ceph:v18, name=boring_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 27 08:30:23 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3618494709' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:30:23 compute-0 ceph-mon[74357]: osdmap e24: 3 total, 2 up, 3 in
Jan 27 08:30:23 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:23 compute-0 ceph-mon[74357]: pgmap v80: 69 pgs: 36 active+clean, 33 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:23 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 27 08:30:23 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Jan 27 08:30:23 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Jan 27 08:30:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:23 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:23 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:23 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Jan 27 08:30:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3858251393' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 27 08:30:23 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 27 08:30:23 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 27 08:30:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 27 08:30:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v82: 69 pgs: 36 active+clean, 33 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:24 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 27 08:30:24 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 27 08:30:24 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3858251393' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 27 08:30:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Jan 27 08:30:24 compute-0 boring_lalande[88656]: enabled application 'rbd' on pool 'vms'
Jan 27 08:30:24 compute-0 systemd[1]: libpod-36c44b705ea98cd5e8571beadf00202e870ef3ca981246d366fdd62ada1bd1b1.scope: Deactivated successfully.
Jan 27 08:30:24 compute-0 podman[88641]: 2026-01-27 08:30:24.798053794 +0000 UTC m=+1.920052817 container died 36c44b705ea98cd5e8571beadf00202e870ef3ca981246d366fdd62ada1bd1b1 (image=quay.io/ceph/ceph:v18, name=boring_lalande, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:24 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Jan 27 08:30:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:24 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:24 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5a6f4eb33dabd12a63171fcb6ad5c7e4d80bc994c155d007819bf273d36c895-merged.mount: Deactivated successfully.
Jan 27 08:30:24 compute-0 podman[88641]: 2026-01-27 08:30:24.945927669 +0000 UTC m=+2.067926692 container remove 36c44b705ea98cd5e8571beadf00202e870ef3ca981246d366fdd62ada1bd1b1 (image=quay.io/ceph/ceph:v18, name=boring_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:24 compute-0 systemd[1]: libpod-conmon-36c44b705ea98cd5e8571beadf00202e870ef3ca981246d366fdd62ada1bd1b1.scope: Deactivated successfully.
Jan 27 08:30:24 compute-0 sudo[88638]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:25 compute-0 ceph-mon[74357]: 2.2 scrub starts
Jan 27 08:30:25 compute-0 ceph-mon[74357]: 2.2 scrub ok
Jan 27 08:30:25 compute-0 ceph-mon[74357]: Deploying daemon osd.2 on compute-2
Jan 27 08:30:25 compute-0 ceph-mon[74357]: 3.1 scrub starts
Jan 27 08:30:25 compute-0 ceph-mon[74357]: 3.1 scrub ok
Jan 27 08:30:25 compute-0 ceph-mon[74357]: osdmap e25: 3 total, 2 up, 3 in
Jan 27 08:30:25 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:25 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3858251393' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 27 08:30:25 compute-0 sudo[88716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnsjaqzvuieoigrfogqewhjdnhfdkadz ; /usr/bin/python3'
Jan 27 08:30:25 compute-0 sudo[88716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:25 compute-0 python3[88718]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:25 compute-0 podman[88719]: 2026-01-27 08:30:25.29632467 +0000 UTC m=+0.037494441 container create c85a8222c554b9e2a58544b334b813bede999fa067cd6dda0ffc319c17293f71 (image=quay.io/ceph/ceph:v18, name=laughing_chaplygin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 08:30:25 compute-0 systemd[1]: Started libpod-conmon-c85a8222c554b9e2a58544b334b813bede999fa067cd6dda0ffc319c17293f71.scope.
Jan 27 08:30:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80187fb24db95fb49b84335d33a8a8c58ed8c0777262283d6bb53cec13d19a56/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80187fb24db95fb49b84335d33a8a8c58ed8c0777262283d6bb53cec13d19a56/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:25 compute-0 podman[88719]: 2026-01-27 08:30:25.351555999 +0000 UTC m=+0.092725790 container init c85a8222c554b9e2a58544b334b813bede999fa067cd6dda0ffc319c17293f71 (image=quay.io/ceph/ceph:v18, name=laughing_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:30:25 compute-0 podman[88719]: 2026-01-27 08:30:25.356553385 +0000 UTC m=+0.097723156 container start c85a8222c554b9e2a58544b334b813bede999fa067cd6dda0ffc319c17293f71 (image=quay.io/ceph/ceph:v18, name=laughing_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:25 compute-0 podman[88719]: 2026-01-27 08:30:25.359342792 +0000 UTC m=+0.100512563 container attach c85a8222c554b9e2a58544b334b813bede999fa067cd6dda0ffc319c17293f71 (image=quay.io/ceph/ceph:v18, name=laughing_chaplygin, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:25 compute-0 podman[88719]: 2026-01-27 08:30:25.281337659 +0000 UTC m=+0.022507460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:25 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 27 08:30:25 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 27 08:30:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Jan 27 08:30:25 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2328782982' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 27 08:30:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 27 08:30:26 compute-0 ceph-mon[74357]: 2.3 scrub starts
Jan 27 08:30:26 compute-0 ceph-mon[74357]: 2.3 scrub ok
Jan 27 08:30:26 compute-0 ceph-mon[74357]: pgmap v82: 69 pgs: 36 active+clean, 33 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:26 compute-0 ceph-mon[74357]: 2.4 scrub starts
Jan 27 08:30:26 compute-0 ceph-mon[74357]: 2.4 scrub ok
Jan 27 08:30:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3858251393' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 27 08:30:26 compute-0 ceph-mon[74357]: osdmap e26: 3 total, 2 up, 3 in
Jan 27 08:30:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2328782982' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 27 08:30:26 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2328782982' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 27 08:30:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Jan 27 08:30:26 compute-0 laughing_chaplygin[88735]: enabled application 'rbd' on pool 'volumes'
Jan 27 08:30:26 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Jan 27 08:30:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:26 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:26 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:26 compute-0 systemd[1]: libpod-c85a8222c554b9e2a58544b334b813bede999fa067cd6dda0ffc319c17293f71.scope: Deactivated successfully.
Jan 27 08:30:26 compute-0 conmon[88735]: conmon c85a8222c554b9e2a585 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c85a8222c554b9e2a58544b334b813bede999fa067cd6dda0ffc319c17293f71.scope/container/memory.events
Jan 27 08:30:26 compute-0 podman[88719]: 2026-01-27 08:30:26.080172116 +0000 UTC m=+0.821341897 container died c85a8222c554b9e2a58544b334b813bede999fa067cd6dda0ffc319c17293f71 (image=quay.io/ceph/ceph:v18, name=laughing_chaplygin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 27 08:30:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-80187fb24db95fb49b84335d33a8a8c58ed8c0777262283d6bb53cec13d19a56-merged.mount: Deactivated successfully.
Jan 27 08:30:26 compute-0 podman[88719]: 2026-01-27 08:30:26.125503141 +0000 UTC m=+0.866672912 container remove c85a8222c554b9e2a58544b334b813bede999fa067cd6dda0ffc319c17293f71 (image=quay.io/ceph/ceph:v18, name=laughing_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:30:26 compute-0 systemd[1]: libpod-conmon-c85a8222c554b9e2a58544b334b813bede999fa067cd6dda0ffc319c17293f71.scope: Deactivated successfully.
Jan 27 08:30:26 compute-0 sudo[88716]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:26 compute-0 sudo[88798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caojhyarqjmfqkjtorxeyksawmmaydrd ; /usr/bin/python3'
Jan 27 08:30:26 compute-0 sudo[88798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:26 compute-0 ceph-mon[74357]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 27 08:30:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:26 compute-0 python3[88800]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:26 compute-0 podman[88801]: 2026-01-27 08:30:26.51001332 +0000 UTC m=+0.049629805 container create fbb49f4f79c5f7ce8b3ef220d0c20cdba25269d5423858ec5a8aeeb1861bf441 (image=quay.io/ceph/ceph:v18, name=vigilant_keldysh, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 08:30:26 compute-0 systemd[1]: Started libpod-conmon-fbb49f4f79c5f7ce8b3ef220d0c20cdba25269d5423858ec5a8aeeb1861bf441.scope.
Jan 27 08:30:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b0078807b46332c8a744a4a6f691887767d48567aed274756193d506026376/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b0078807b46332c8a744a4a6f691887767d48567aed274756193d506026376/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:26 compute-0 podman[88801]: 2026-01-27 08:30:26.489107036 +0000 UTC m=+0.028723551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:26 compute-0 podman[88801]: 2026-01-27 08:30:26.589796323 +0000 UTC m=+0.129412818 container init fbb49f4f79c5f7ce8b3ef220d0c20cdba25269d5423858ec5a8aeeb1861bf441 (image=quay.io/ceph/ceph:v18, name=vigilant_keldysh, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:30:26 compute-0 podman[88801]: 2026-01-27 08:30:26.596760344 +0000 UTC m=+0.136376849 container start fbb49f4f79c5f7ce8b3ef220d0c20cdba25269d5423858ec5a8aeeb1861bf441 (image=quay.io/ceph/ceph:v18, name=vigilant_keldysh, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:30:26 compute-0 podman[88801]: 2026-01-27 08:30:26.60058519 +0000 UTC m=+0.140201695 container attach fbb49f4f79c5f7ce8b3ef220d0c20cdba25269d5423858ec5a8aeeb1861bf441 (image=quay.io/ceph/ceph:v18, name=vigilant_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v85: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 27 08:30:26 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:30:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 27 08:30:26 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:30:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 27 08:30:27 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:30:27 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:30:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Jan 27 08:30:27 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Jan 27 08:30:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:27 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:27 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.a( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168497086s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.702751160s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.1f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.163395882s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.697654724s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.1e( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.169357300s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.703659058s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.a( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168401718s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.702751160s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.1e( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.169263840s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.703659058s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.1f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.163284302s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.697654724s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.9( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168324471s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.702774048s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.6( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168478012s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.702949524s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.9( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168201447s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.702774048s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.6( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168375969s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.702949524s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.4( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168414116s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.703041077s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.4( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168387413s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.703041077s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.1( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168164253s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.703094482s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168185234s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.703147888s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168201447s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.703178406s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.1( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168125153s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.703094482s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168152809s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.703147888s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168170929s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.703178406s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.e( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168032646s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.703178406s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.e( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168010712s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.703178406s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.10( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168770790s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.704002380s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.10( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168742180s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.704002380s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.13( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168421745s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.703704834s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.15( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168457985s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.703773499s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.13( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168382645s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.703704834s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.15( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168429375s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.703773499s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.19( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168436050s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.703865051s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.19( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168411255s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.703865051s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.1b( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168236732s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 64.703971863s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[2.1b( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.168199539s) [1] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.703971863s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:27 compute-0 ceph-mon[74357]: 2.5 scrub starts
Jan 27 08:30:27 compute-0 ceph-mon[74357]: 2.5 scrub ok
Jan 27 08:30:27 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2328782982' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 27 08:30:27 compute-0 ceph-mon[74357]: osdmap e27: 3 total, 2 up, 3 in
Jan 27 08:30:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:27 compute-0 ceph-mon[74357]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 27 08:30:27 compute-0 ceph-mon[74357]: pgmap v85: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:30:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Jan 27 08:30:27 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1249326055' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 27 08:30:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:30:27 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:30:27 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 27 08:30:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:30:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:30:28 compute-0 ceph-mon[74357]: osdmap e28: 3 total, 2 up, 3 in
Jan 27 08:30:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:28 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1249326055' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 27 08:30:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1249326055' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 27 08:30:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Jan 27 08:30:28 compute-0 vigilant_keldysh[88816]: enabled application 'rbd' on pool 'backups'
Jan 27 08:30:28 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Jan 27 08:30:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:28 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:28 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:28 compute-0 systemd[1]: libpod-fbb49f4f79c5f7ce8b3ef220d0c20cdba25269d5423858ec5a8aeeb1861bf441.scope: Deactivated successfully.
Jan 27 08:30:28 compute-0 podman[88801]: 2026-01-27 08:30:28.128499207 +0000 UTC m=+1.668115712 container died fbb49f4f79c5f7ce8b3ef220d0c20cdba25269d5423858ec5a8aeeb1861bf441 (image=quay.io/ceph/ceph:v18, name=vigilant_keldysh, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 27 08:30:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-63b0078807b46332c8a744a4a6f691887767d48567aed274756193d506026376-merged.mount: Deactivated successfully.
Jan 27 08:30:28 compute-0 podman[88801]: 2026-01-27 08:30:28.192919947 +0000 UTC m=+1.732536452 container remove fbb49f4f79c5f7ce8b3ef220d0c20cdba25269d5423858ec5a8aeeb1861bf441 (image=quay.io/ceph/ceph:v18, name=vigilant_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 27 08:30:28 compute-0 systemd[1]: libpod-conmon-fbb49f4f79c5f7ce8b3ef220d0c20cdba25269d5423858ec5a8aeeb1861bf441.scope: Deactivated successfully.
Jan 27 08:30:28 compute-0 sudo[88798]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:28 compute-0 sudo[88880]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpgvsamzrvjhncvjlmfgnubydltijmck ; /usr/bin/python3'
Jan 27 08:30:28 compute-0 sudo[88880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:28 compute-0 python3[88882]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:28 compute-0 podman[88883]: 2026-01-27 08:30:28.540249334 +0000 UTC m=+0.041113461 container create 7659ef30c1075d6c20ab6f435640b0e7af1371b9dd8803ab3f4123d08bf1ebc6 (image=quay.io/ceph/ceph:v18, name=wonderful_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:28 compute-0 systemd[1]: Started libpod-conmon-7659ef30c1075d6c20ab6f435640b0e7af1371b9dd8803ab3f4123d08bf1ebc6.scope.
Jan 27 08:30:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba53e93371583314b20d6973b134b2162b77bf93dbe01ab721708291a4938a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba53e93371583314b20d6973b134b2162b77bf93dbe01ab721708291a4938a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:28 compute-0 podman[88883]: 2026-01-27 08:30:28.601723434 +0000 UTC m=+0.102587591 container init 7659ef30c1075d6c20ab6f435640b0e7af1371b9dd8803ab3f4123d08bf1ebc6 (image=quay.io/ceph/ceph:v18, name=wonderful_lederberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:28 compute-0 podman[88883]: 2026-01-27 08:30:28.60810857 +0000 UTC m=+0.108972697 container start 7659ef30c1075d6c20ab6f435640b0e7af1371b9dd8803ab3f4123d08bf1ebc6 (image=quay.io/ceph/ceph:v18, name=wonderful_lederberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 27 08:30:28 compute-0 podman[88883]: 2026-01-27 08:30:28.611362639 +0000 UTC m=+0.112226796 container attach 7659ef30c1075d6c20ab6f435640b0e7af1371b9dd8803ab3f4123d08bf1ebc6 (image=quay.io/ceph/ceph:v18, name=wonderful_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:28 compute-0 podman[88883]: 2026-01-27 08:30:28.521761756 +0000 UTC m=+0.022625913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v88: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:28 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 27 08:30:28 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 27 08:30:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Jan 27 08:30:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 27 08:30:28 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event 4dc1d8d5-a12e-49d9-a6d5-f609f5cbe2ce (Global Recovery Event) in 16 seconds
Jan 27 08:30:29 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.cbywrc started
Jan 27 08:30:29 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mgr.compute-2.cbywrc 192.168.122.102:0/916222943; not ready for session (expect reconnect)
Jan 27 08:30:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 27 08:30:29 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.vujqxq(active, since 2m), standbys: compute-2.cbywrc
Jan 27 08:30:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.cbywrc", "id": "compute-2.cbywrc"} v 0) v1
Jan 27 08:30:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr metadata", "who": "compute-2.cbywrc", "id": "compute-2.cbywrc"}]: dispatch
Jan 27 08:30:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Jan 27 08:30:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1041750568' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 27 08:30:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 27 08:30:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Jan 27 08:30:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1249326055' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 27 08:30:29 compute-0 ceph-mon[74357]: osdmap e29: 3 total, 2 up, 3 in
Jan 27 08:30:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:29 compute-0 ceph-mon[74357]: pgmap v88: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:29 compute-0 ceph-mon[74357]: from='osd.2 [v2:192.168.122.102:6800/2215821541,v1:192.168.122.102:6801/2215821541]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 27 08:30:29 compute-0 ceph-mon[74357]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 27 08:30:29 compute-0 ceph-mon[74357]: Standby manager daemon compute-2.cbywrc started
Jan 27 08:30:29 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Jan 27 08:30:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:29 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Jan 27 08:30:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 27 08:30:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e30 create-or-move crush item name 'osd.2' initial_weight 0.0068000000000000005 at location {host=compute-2,root=default}
Jan 27 08:30:29 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.jqbgxp started
Jan 27 08:30:29 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from mgr.compute-1.jqbgxp 192.168.122.101:0/615236990; not ready for session (expect reconnect)
Jan 27 08:30:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:30:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:30:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:29 compute-0 sudo[88923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:29 compute-0 sudo[88923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:29 compute-0 sudo[88923]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:29 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 27 08:30:29 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 27 08:30:29 compute-0 sudo[88948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:30:29 compute-0 sudo[88948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:29 compute-0 sudo[88948]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 27 08:30:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1041750568' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 27 08:30:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 27 08:30:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Jan 27 08:30:30 compute-0 wonderful_lederberg[88899]: enabled application 'rbd' on pool 'images'
Jan 27 08:30:30 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Jan 27 08:30:30 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2215821541; not ready for session (expect reconnect)
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[3.1d( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.850866318s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 73.573623657s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[3.1d( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.850866318s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.573623657s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.979990959s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active pruub 72.702857971s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.979990959s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.702857971s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.979908943s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active pruub 72.702827454s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:30 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[3.9( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.850635529s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 73.573600769s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.979802132s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active pruub 72.702796936s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.979908943s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.702827454s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.979802132s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.702796936s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.980041504s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active pruub 72.703140259s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.980041504s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.703140259s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[3.e( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.856924057s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 73.580093384s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[3.e( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.856924057s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.580093384s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.980124474s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active pruub 72.703346252s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.980124474s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.703346252s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[3.11( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.856647491s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 73.579963684s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[3.15( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.856766701s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 73.580093384s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.980442047s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active pruub 72.703773499s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.980442047s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.703773499s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[3.11( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.856647491s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.579963684s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[3.15( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.856766701s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.580093384s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[3.9( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.850635529s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.573600769s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.980451584s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active pruub 72.704048157s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[3.1a( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.856552124s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 73.580162048s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=12.980451584s) [] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.704048157s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 31 pg[3.1a( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.856552124s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.580162048s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:30 compute-0 systemd[1]: libpod-7659ef30c1075d6c20ab6f435640b0e7af1371b9dd8803ab3f4123d08bf1ebc6.scope: Deactivated successfully.
Jan 27 08:30:30 compute-0 podman[88883]: 2026-01-27 08:30:30.269193488 +0000 UTC m=+1.770057615 container died 7659ef30c1075d6c20ab6f435640b0e7af1371b9dd8803ab3f4123d08bf1ebc6 (image=quay.io/ceph/ceph:v18, name=wonderful_lederberg, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:30 compute-0 ceph-mon[74357]: 2.7 scrub starts
Jan 27 08:30:30 compute-0 ceph-mon[74357]: 2.7 scrub ok
Jan 27 08:30:30 compute-0 ceph-mon[74357]: mgrmap e10: compute-0.vujqxq(active, since 2m), standbys: compute-2.cbywrc
Jan 27 08:30:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr metadata", "who": "compute-2.cbywrc", "id": "compute-2.cbywrc"}]: dispatch
Jan 27 08:30:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1041750568' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 27 08:30:30 compute-0 ceph-mon[74357]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 27 08:30:30 compute-0 ceph-mon[74357]: from='osd.2 [v2:192.168.122.102:6800/2215821541,v1:192.168.122.102:6801/2215821541]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 27 08:30:30 compute-0 ceph-mon[74357]: osdmap e30: 3 total, 2 up, 3 in
Jan 27 08:30:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:30 compute-0 ceph-mon[74357]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 27 08:30:30 compute-0 ceph-mon[74357]: Standby manager daemon compute-1.jqbgxp started
Jan 27 08:30:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:30 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.vujqxq(active, since 2m), standbys: compute-2.cbywrc, compute-1.jqbgxp
Jan 27 08:30:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.jqbgxp", "id": "compute-1.jqbgxp"} v 0) v1
Jan 27 08:30:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr metadata", "who": "compute-1.jqbgxp", "id": "compute-1.jqbgxp"}]: dispatch
Jan 27 08:30:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ba53e93371583314b20d6973b134b2162b77bf93dbe01ab721708291a4938a2-merged.mount: Deactivated successfully.
Jan 27 08:30:30 compute-0 podman[88883]: 2026-01-27 08:30:30.319997034 +0000 UTC m=+1.820861161 container remove 7659ef30c1075d6c20ab6f435640b0e7af1371b9dd8803ab3f4123d08bf1ebc6 (image=quay.io/ceph/ceph:v18, name=wonderful_lederberg, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:30 compute-0 systemd[1]: libpod-conmon-7659ef30c1075d6c20ab6f435640b0e7af1371b9dd8803ab3f4123d08bf1ebc6.scope: Deactivated successfully.
Jan 27 08:30:30 compute-0 sudo[88880]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:30 compute-0 sudo[89007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggsoozovvxrhuzabnuqvvojxbraromcb ; /usr/bin/python3'
Jan 27 08:30:30 compute-0 sudo[89007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:30 compute-0 python3[89009]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:30 compute-0 podman[89010]: 2026-01-27 08:30:30.659454644 +0000 UTC m=+0.042858189 container create 156971bbeab7d5f6df782f91f05cf225f4de192bdedc40e4598b5f32887f993c (image=quay.io/ceph/ceph:v18, name=serene_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v91: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:30 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 27 08:30:30 compute-0 systemd[1]: Started libpod-conmon-156971bbeab7d5f6df782f91f05cf225f4de192bdedc40e4598b5f32887f993c.scope.
Jan 27 08:30:30 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 27 08:30:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba124e12524d1c9adbe81c2c1879d62b1c04cc48740f61819722bfb600404d44/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba124e12524d1c9adbe81c2c1879d62b1c04cc48740f61819722bfb600404d44/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:30 compute-0 podman[89010]: 2026-01-27 08:30:30.73093911 +0000 UTC m=+0.114342685 container init 156971bbeab7d5f6df782f91f05cf225f4de192bdedc40e4598b5f32887f993c (image=quay.io/ceph/ceph:v18, name=serene_euclid, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:30 compute-0 podman[89010]: 2026-01-27 08:30:30.736955605 +0000 UTC m=+0.120359150 container start 156971bbeab7d5f6df782f91f05cf225f4de192bdedc40e4598b5f32887f993c (image=quay.io/ceph/ceph:v18, name=serene_euclid, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:30:30 compute-0 podman[89010]: 2026-01-27 08:30:30.643728583 +0000 UTC m=+0.027132148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:30 compute-0 podman[89010]: 2026-01-27 08:30:30.740776359 +0000 UTC m=+0.124179904 container attach 156971bbeab7d5f6df782f91f05cf225f4de192bdedc40e4598b5f32887f993c (image=quay.io/ceph/ceph:v18, name=serene_euclid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:30:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:30:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Jan 27 08:30:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2173596239' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 27 08:30:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 27 08:30:31 compute-0 ceph-mon[74357]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 27 08:30:31 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2215821541; not ready for session (expect reconnect)
Jan 27 08:30:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:31 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:31 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:30:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2173596239' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 27 08:30:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Jan 27 08:30:31 compute-0 serene_euclid[89026]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 27 08:30:31 compute-0 ceph-mon[74357]: 2.8 scrub starts
Jan 27 08:30:31 compute-0 ceph-mon[74357]: 2.8 scrub ok
Jan 27 08:30:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1041750568' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 27 08:30:31 compute-0 ceph-mon[74357]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 27 08:30:31 compute-0 ceph-mon[74357]: osdmap e31: 3 total, 2 up, 3 in
Jan 27 08:30:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:31 compute-0 ceph-mon[74357]: mgrmap e11: compute-0.vujqxq(active, since 2m), standbys: compute-2.cbywrc, compute-1.jqbgxp
Jan 27 08:30:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr metadata", "who": "compute-1.jqbgxp", "id": "compute-1.jqbgxp"}]: dispatch
Jan 27 08:30:31 compute-0 ceph-mon[74357]: 3.2 scrub starts
Jan 27 08:30:31 compute-0 ceph-mon[74357]: 3.2 scrub ok
Jan 27 08:30:31 compute-0 ceph-mon[74357]: pgmap v91: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:31 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Jan 27 08:30:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:31 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:31 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:31 compute-0 systemd[1]: libpod-156971bbeab7d5f6df782f91f05cf225f4de192bdedc40e4598b5f32887f993c.scope: Deactivated successfully.
Jan 27 08:30:31 compute-0 podman[89010]: 2026-01-27 08:30:31.745123526 +0000 UTC m=+1.128527071 container died 156971bbeab7d5f6df782f91f05cf225f4de192bdedc40e4598b5f32887f993c (image=quay.io/ceph/ceph:v18, name=serene_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba124e12524d1c9adbe81c2c1879d62b1c04cc48740f61819722bfb600404d44-merged.mount: Deactivated successfully.
Jan 27 08:30:31 compute-0 podman[89010]: 2026-01-27 08:30:31.781688911 +0000 UTC m=+1.165092456 container remove 156971bbeab7d5f6df782f91f05cf225f4de192bdedc40e4598b5f32887f993c (image=quay.io/ceph/ceph:v18, name=serene_euclid, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 27 08:30:31 compute-0 systemd[1]: libpod-conmon-156971bbeab7d5f6df782f91f05cf225f4de192bdedc40e4598b5f32887f993c.scope: Deactivated successfully.
Jan 27 08:30:31 compute-0 sudo[89007]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:31 compute-0 sudo[89087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uitrsrvnnbqsauzbdzzyddszgbngyzpt ; /usr/bin/python3'
Jan 27 08:30:31 compute-0 sudo[89087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:30:32 compute-0 python3[89089]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
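The Ansible task above shells out to a one-shot containerized Ceph client; stripped of the podman wrapping, the step is the standard pool-application command. A minimal sketch of the equivalent direct call, assuming admin credentials at the default /etc/ceph paths:

    # enable the 'cephfs' application tag on the CephFS data pool
    ceph osd pool application enable cephfs.cephfs.data cephfs

Tagging each pool with an application is what clears the POOL_APP_NOT_ENABLED health warning seen in the surrounding monitor lines.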
Jan 27 08:30:32 compute-0 podman[89090]: 2026-01-27 08:30:32.156688359 +0000 UTC m=+0.046133790 container create f955da5c0ea56840f0201af5f1aa3db73423c7691e832e859cf73dd01309a369 (image=quay.io/ceph/ceph:v18, name=blissful_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:32 compute-0 systemd[1]: Started libpod-conmon-f955da5c0ea56840f0201af5f1aa3db73423c7691e832e859cf73dd01309a369.scope.
Jan 27 08:30:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c697d8ddaf54153e97762ec8ddf7d40e2645e58e5548e87edf8d5c785d58c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c697d8ddaf54153e97762ec8ddf7d40e2645e58e5548e87edf8d5c785d58c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:32 compute-0 podman[89090]: 2026-01-27 08:30:32.225013656 +0000 UTC m=+0.114459107 container init f955da5c0ea56840f0201af5f1aa3db73423c7691e832e859cf73dd01309a369 (image=quay.io/ceph/ceph:v18, name=blissful_dubinsky, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 27 08:30:32 compute-0 podman[89090]: 2026-01-27 08:30:32.230125347 +0000 UTC m=+0.119570768 container start f955da5c0ea56840f0201af5f1aa3db73423c7691e832e859cf73dd01309a369 (image=quay.io/ceph/ceph:v18, name=blissful_dubinsky, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 27 08:30:32 compute-0 podman[89090]: 2026-01-27 08:30:32.136845103 +0000 UTC m=+0.026290574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:32 compute-0 podman[89090]: 2026-01-27 08:30:32.232823721 +0000 UTC m=+0.122269192 container attach f955da5c0ea56840f0201af5f1aa3db73423c7691e832e859cf73dd01309a369 (image=quay.io/ceph/ceph:v18, name=blissful_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:30:32 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2215821541; not ready for session (expect reconnect)
Jan 27 08:30:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:32 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:32 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:30:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v93: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Jan 27 08:30:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3463856097' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 27 08:30:32 compute-0 ceph-mon[74357]: purged_snaps scrub starts
Jan 27 08:30:32 compute-0 ceph-mon[74357]: purged_snaps scrub ok
Jan 27 08:30:32 compute-0 ceph-mon[74357]: 2.11 scrub starts
Jan 27 08:30:32 compute-0 ceph-mon[74357]: 2.11 scrub ok
Jan 27 08:30:32 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2173596239' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 27 08:30:32 compute-0 ceph-mon[74357]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 27 08:30:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:32 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2173596239' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 27 08:30:32 compute-0 ceph-mon[74357]: osdmap e32: 3 total, 2 up, 3 in
Jan 27 08:30:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:33 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2215821541; not ready for session (expect reconnect)
Jan 27 08:30:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:33 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:33 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 27 08:30:33 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3463856097' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 27 08:30:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Jan 27 08:30:33 compute-0 blissful_dubinsky[89105]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 27 08:30:33 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Jan 27 08:30:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:33 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:33 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:33 compute-0 systemd[1]: libpod-f955da5c0ea56840f0201af5f1aa3db73423c7691e832e859cf73dd01309a369.scope: Deactivated successfully.
Jan 27 08:30:33 compute-0 podman[89130]: 2026-01-27 08:30:33.875687648 +0000 UTC m=+0.022519301 container died f955da5c0ea56840f0201af5f1aa3db73423c7691e832e859cf73dd01309a369 (image=quay.io/ceph/ceph:v18, name=blissful_dubinsky, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-35c697d8ddaf54153e97762ec8ddf7d40e2645e58e5548e87edf8d5c785d58c2-merged.mount: Deactivated successfully.
Jan 27 08:30:33 compute-0 podman[89130]: 2026-01-27 08:30:33.907823021 +0000 UTC m=+0.054654664 container remove f955da5c0ea56840f0201af5f1aa3db73423c7691e832e859cf73dd01309a369 (image=quay.io/ceph/ceph:v18, name=blissful_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 27 08:30:33 compute-0 systemd[1]: libpod-conmon-f955da5c0ea56840f0201af5f1aa3db73423c7691e832e859cf73dd01309a369.scope: Deactivated successfully.
Jan 27 08:30:33 compute-0 ceph-mgr[74650]: [progress INFO root] Writing back 8 completed events
Jan 27 08:30:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 27 08:30:33 compute-0 sudo[89087]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:34 compute-0 ceph-mon[74357]: pgmap v93: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:34 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3463856097' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 27 08:30:34 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:34 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3463856097' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 27 08:30:34 compute-0 ceph-mon[74357]: osdmap e33: 3 total, 2 up, 3 in
Jan 27 08:30:34 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:34 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2215821541; not ready for session (expect reconnect)
Jan 27 08:30:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:34 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:34 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v95: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:34 compute-0 python3[89220]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:30:35 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2215821541; not ready for session (expect reconnect)
Jan 27 08:30:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:35 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:35 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:35 compute-0 python3[89291]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769502634.6312892-37288-98912421496975/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:30:35 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 27 08:30:35 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 27 08:30:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:35 compute-0 ceph-mon[74357]: pgmap v95: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:35 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 27 08:30:35 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 27 08:30:35 compute-0 sudo[89391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwzuzuczzjqwwabkgejupnsrqfyzkoss ; /usr/bin/python3'
Jan 27 08:30:35 compute-0 sudo[89391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:35 compute-0 python3[89393]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:30:35 compute-0 sudo[89391]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:36 compute-0 sudo[89466]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgjzdkkxmidfploxsxbyhmsxeklbjyeq ; /usr/bin/python3'
Jan 27 08:30:36 compute-0 sudo[89466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:36 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2215821541; not ready for session (expect reconnect)
Jan 27 08:30:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:36 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:36 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:36 compute-0 python3[89468]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769502635.6152744-37302-227275235809474/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=db4348eca00ad360dd3ce21d74b2f5f10d6d572d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:30:36 compute-0 sudo[89466]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:36 compute-0 sudo[89516]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaeyhsvmxdtbqgykaczzovbowygiletl ; /usr/bin/python3'
Jan 27 08:30:36 compute-0 sudo[89516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:36 compute-0 ceph-mon[74357]: 3.4 scrub starts
Jan 27 08:30:36 compute-0 ceph-mon[74357]: 3.4 scrub ok
Jan 27 08:30:36 compute-0 ceph-mon[74357]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 27 08:30:36 compute-0 ceph-mon[74357]: Cluster is now healthy
Jan 27 08:30:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:36 compute-0 python3[89518]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v96: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:36 compute-0 podman[89519]: 2026-01-27 08:30:36.682382093 +0000 UTC m=+0.035929447 container create aa97677c60d27532e0b3dba5ded96800649277731267a9f105cbc4937d0124d9 (image=quay.io/ceph/ceph:v18, name=priceless_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:30:36 compute-0 systemd[1]: Started libpod-conmon-aa97677c60d27532e0b3dba5ded96800649277731267a9f105cbc4937d0124d9.scope.
Jan 27 08:30:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37df6b51417d7c023c49245fb0f05e04859c14fb6326389664e02610f9f647ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37df6b51417d7c023c49245fb0f05e04859c14fb6326389664e02610f9f647ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37df6b51417d7c023c49245fb0f05e04859c14fb6326389664e02610f9f647ee/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:36 compute-0 podman[89519]: 2026-01-27 08:30:36.758417994 +0000 UTC m=+0.111965448 container init aa97677c60d27532e0b3dba5ded96800649277731267a9f105cbc4937d0124d9 (image=quay.io/ceph/ceph:v18, name=priceless_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:30:36 compute-0 podman[89519]: 2026-01-27 08:30:36.667457514 +0000 UTC m=+0.021004888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:36 compute-0 podman[89519]: 2026-01-27 08:30:36.7633905 +0000 UTC m=+0.116937854 container start aa97677c60d27532e0b3dba5ded96800649277731267a9f105cbc4937d0124d9 (image=quay.io/ceph/ceph:v18, name=priceless_chatterjee, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:30:36 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 27 08:30:36 compute-0 podman[89519]: 2026-01-27 08:30:36.766902247 +0000 UTC m=+0.120449621 container attach aa97677c60d27532e0b3dba5ded96800649277731267a9f105cbc4937d0124d9 (image=quay.io/ceph/ceph:v18, name=priceless_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 08:30:36 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 27 08:30:37 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2215821541; not ready for session (expect reconnect)
Jan 27 08:30:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 27 08:30:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/849784202' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 27 08:30:37 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/849784202' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 27 08:30:37 compute-0 priceless_chatterjee[89534]: 
Jan 27 08:30:37 compute-0 priceless_chatterjee[89534]: [global]
Jan 27 08:30:37 compute-0 priceless_chatterjee[89534]:         fsid = 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:30:37 compute-0 priceless_chatterjee[89534]:         mon_host = 192.168.122.100
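config assimilate-conf moves options from an INI-style ceph.conf into the monitors' centralized configuration database and prints back the minimal file a client still needs on disk; here only the [global] fsid and mon_host remain, since those are required to reach the monitors in the first place. A sketch of the same step run directly, assuming the mounted file path used above:

    # ingest the legacy conf; assimilated options land in the mon config store
    ceph config assimilate-conf -i /home/assimilate_ceph.conf
    # verify what was absorbed
    ceph config dump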
Jan 27 08:30:37 compute-0 systemd[1]: libpod-aa97677c60d27532e0b3dba5ded96800649277731267a9f105cbc4937d0124d9.scope: Deactivated successfully.
Jan 27 08:30:37 compute-0 podman[89559]: 2026-01-27 08:30:37.82827014 +0000 UTC m=+0.030527949 container died aa97677c60d27532e0b3dba5ded96800649277731267a9f105cbc4937d0124d9 (image=quay.io/ceph/ceph:v18, name=priceless_chatterjee, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-37df6b51417d7c023c49245fb0f05e04859c14fb6326389664e02610f9f647ee-merged.mount: Deactivated successfully.
Jan 27 08:30:37 compute-0 ceph-mon[74357]: 2.14 scrub starts
Jan 27 08:30:37 compute-0 ceph-mon[74357]: 2.14 scrub ok
Jan 27 08:30:37 compute-0 ceph-mon[74357]: pgmap v96: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/849784202' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 27 08:30:37 compute-0 podman[89559]: 2026-01-27 08:30:37.869157334 +0000 UTC m=+0.071415123 container remove aa97677c60d27532e0b3dba5ded96800649277731267a9f105cbc4937d0124d9 (image=quay.io/ceph/ceph:v18, name=priceless_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:30:37 compute-0 systemd[1]: libpod-conmon-aa97677c60d27532e0b3dba5ded96800649277731267a9f105cbc4937d0124d9.scope: Deactivated successfully.
Jan 27 08:30:37 compute-0 sudo[89516]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:38 compute-0 sudo[89597]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dceppuiihrnzzfbizvecgsjmmrwdqdzo ; /usr/bin/python3'
Jan 27 08:30:38 compute-0 sudo[89597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:30:38 compute-0 python3[89599]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2215821541; not ready for session (expect reconnect)
Jan 27 08:30:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:38 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:38 compute-0 podman[89600]: 2026-01-27 08:30:38.29032294 +0000 UTC m=+0.040885834 container create 7e447147dd22d03f3bcd761f74a061c218708b245429efdacd0c36760021d7d3 (image=quay.io/ceph/ceph:v18, name=vigorous_matsumoto, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:30:38 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:30:38 compute-0 systemd[1]: Started libpod-conmon-7e447147dd22d03f3bcd761f74a061c218708b245429efdacd0c36760021d7d3.scope.
Jan 27 08:30:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59fd913887b8e08395863dd5f25a2dedf5e61e043caad22dc42aaca4d25ecac5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59fd913887b8e08395863dd5f25a2dedf5e61e043caad22dc42aaca4d25ecac5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59fd913887b8e08395863dd5f25a2dedf5e61e043caad22dc42aaca4d25ecac5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:38 compute-0 podman[89600]: 2026-01-27 08:30:38.350459744 +0000 UTC m=+0.101022658 container init 7e447147dd22d03f3bcd761f74a061c218708b245429efdacd0c36760021d7d3 (image=quay.io/ceph/ceph:v18, name=vigorous_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 27 08:30:38 compute-0 podman[89600]: 2026-01-27 08:30:38.356543461 +0000 UTC m=+0.107106355 container start 7e447147dd22d03f3bcd761f74a061c218708b245429efdacd0c36760021d7d3 (image=quay.io/ceph/ceph:v18, name=vigorous_matsumoto, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:30:38 compute-0 podman[89600]: 2026-01-27 08:30:38.360313454 +0000 UTC m=+0.110876368 container attach 7e447147dd22d03f3bcd761f74a061c218708b245429efdacd0c36760021d7d3 (image=quay.io/ceph/ceph:v18, name=vigorous_matsumoto, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:38 compute-0 podman[89600]: 2026-01-27 08:30:38.273968862 +0000 UTC m=+0.024531776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:38 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Jan 27 08:30:38 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Jan 27 08:30:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
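The WRN above is cephadm's memory autotuner hitting the option's floor: it computed 134209126 bytes (about 128 MiB, the "127.9M" in the INF line) for this small test node, but osd_memory_target rejects values below 939524096 bytes (896 MiB), so the set fails and the preceding "config rm" leaves the per-OSD override cleared. A sketch of how one might inspect or silence this on a constrained lab host, assuming current cephadm option names:

    # see the effective (floored) value
    ceph config get osd osd_memory_target
    # stop cephadm from recomputing per-host targets
    ceph config set osd osd_memory_target_autotune false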
Jan 27 08:30:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:30:38 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:30:38 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 27 08:30:38 compute-0 sudo[89619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:38 compute-0 sudo[89619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:38 compute-0 sudo[89619]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:38 compute-0 sudo[89644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 27 08:30:38 compute-0 sudo[89644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:38 compute-0 sudo[89644]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:38 compute-0 sudo[89669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:38 compute-0 sudo[89669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:38 compute-0 sudo[89669]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v97: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:38 compute-0 sudo[89694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph
Jan 27 08:30:38 compute-0 sudo[89694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:38 compute-0 sudo[89694]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:38 compute-0 sudo[89732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:38 compute-0 sudo[89732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:38 compute-0 sudo[89732]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:38 compute-0 sudo[89763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.conf.new
Jan 27 08:30:38 compute-0 sudo[89763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:38 compute-0 sudo[89763]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:38 compute-0 sudo[89788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:38 compute-0 sudo[89788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:38 compute-0 sudo[89788]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:38 compute-0 ceph-mon[74357]: 2.16 scrub starts
Jan 27 08:30:38 compute-0 ceph-mon[74357]: 2.16 scrub ok
Jan 27 08:30:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/849784202' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 27 08:30:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 27 08:30:38 compute-0 ceph-mon[74357]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 27 08:30:38 compute-0 ceph-mon[74357]: Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 27 08:30:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:30:38 compute-0 ceph-mon[74357]: Updating compute-0:/etc/ceph/ceph.conf
Jan 27 08:30:38 compute-0 ceph-mon[74357]: Updating compute-1:/etc/ceph/ceph.conf
Jan 27 08:30:38 compute-0 ceph-mon[74357]: Updating compute-2:/etc/ceph/ceph.conf
Jan 27 08:30:38 compute-0 ceph-mon[74357]: pgmap v97: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 27 08:30:38 compute-0 sudo[89813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:30:38 compute-0 sudo[89813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:38 compute-0 sudo[89813]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:38 compute-0 sudo[89838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:38 compute-0 sudo[89838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:38 compute-0 sudo[89838]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Jan 27 08:30:39 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1751593799' entity='client.admin' 
Jan 27 08:30:39 compute-0 vigorous_matsumoto[89615]: set ssl_option
Jan 27 08:30:39 compute-0 sudo[89863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.conf.new
Jan 27 08:30:39 compute-0 sudo[89863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[89863]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 systemd[1]: libpod-7e447147dd22d03f3bcd761f74a061c218708b245429efdacd0c36760021d7d3.scope: Deactivated successfully.
Jan 27 08:30:39 compute-0 podman[89600]: 2026-01-27 08:30:39.055866553 +0000 UTC m=+0.806429437 container died 7e447147dd22d03f3bcd761f74a061c218708b245429efdacd0c36760021d7d3 (image=quay.io/ceph/ceph:v18, name=vigorous_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:30:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-59fd913887b8e08395863dd5f25a2dedf5e61e043caad22dc42aaca4d25ecac5-merged.mount: Deactivated successfully.
Jan 27 08:30:39 compute-0 podman[89600]: 2026-01-27 08:30:39.093320472 +0000 UTC m=+0.843883356 container remove 7e447147dd22d03f3bcd761f74a061c218708b245429efdacd0c36760021d7d3 (image=quay.io/ceph/ceph:v18, name=vigorous_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 08:30:39 compute-0 systemd[1]: libpod-conmon-7e447147dd22d03f3bcd761f74a061c218708b245429efdacd0c36760021d7d3.scope: Deactivated successfully.
Jan 27 08:30:39 compute-0 sudo[89597]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 sudo[89924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:39 compute-0 sudo[89924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[89924]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 sudo[89949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.conf.new
Jan 27 08:30:39 compute-0 sudo[89949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[89949]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 sudo[89974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:39 compute-0 ceph-mgr[74650]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2215821541; not ready for session (expect reconnect)
Jan 27 08:30:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:39 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:39 compute-0 sudo[89974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 ceph-mgr[74650]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:30:39 compute-0 sudo[89974]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 sudo[90023]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdjedfhlxmecjffkjqcmqadnzllswfsv ; /usr/bin/python3'
Jan 27 08:30:39 compute-0 sudo[90023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:39 compute-0 sudo[90022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.conf.new
Jan 27 08:30:39 compute-0 sudo[90022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[90022]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:30:39 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:30:39 compute-0 sudo[90050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:39 compute-0 sudo[90050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[90050]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 sudo[90075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 27 08:30:39 compute-0 sudo[90075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[90075]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:30:39 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:30:39 compute-0 python3[90039]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:39 compute-0 sudo[90100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:39 compute-0 sudo[90100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 podman[90101]: 2026-01-27 08:30:39.493693597 +0000 UTC m=+0.034470378 container create 12d7c93893b2004944743ada5787ca86b2c989281a8e1b344b67cfd4c1abac06 (image=quay.io/ceph/ceph:v18, name=bold_jepsen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 27 08:30:39 compute-0 sudo[90100]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 systemd[1]: Started libpod-conmon-12d7c93893b2004944743ada5787ca86b2c989281a8e1b344b67cfd4c1abac06.scope.
Jan 27 08:30:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84e5d68759e53baa3974fdd431f0b3c30cb4ad5f85dfa5a7e5baee83086266e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84e5d68759e53baa3974fdd431f0b3c30cb4ad5f85dfa5a7e5baee83086266e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84e5d68759e53baa3974fdd431f0b3c30cb4ad5f85dfa5a7e5baee83086266e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:39 compute-0 sudo[90138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config
Jan 27 08:30:39 compute-0 sudo[90138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[90138]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 podman[90101]: 2026-01-27 08:30:39.562496119 +0000 UTC m=+0.103272930 container init 12d7c93893b2004944743ada5787ca86b2c989281a8e1b344b67cfd4c1abac06 (image=quay.io/ceph/ceph:v18, name=bold_jepsen, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:39 compute-0 podman[90101]: 2026-01-27 08:30:39.568656768 +0000 UTC m=+0.109433549 container start 12d7c93893b2004944743ada5787ca86b2c989281a8e1b344b67cfd4c1abac06 (image=quay.io/ceph/ceph:v18, name=bold_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 08:30:39 compute-0 podman[90101]: 2026-01-27 08:30:39.479369054 +0000 UTC m=+0.020145865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:39 compute-0 podman[90101]: 2026-01-27 08:30:39.575702251 +0000 UTC m=+0.116479032 container attach 12d7c93893b2004944743ada5787ca86b2c989281a8e1b344b67cfd4c1abac06 (image=quay.io/ceph/ceph:v18, name=bold_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 08:30:39 compute-0 sudo[90169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:39 compute-0 sudo[90169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[90169]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:30:39 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:30:39 compute-0 sudo[90194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config
Jan 27 08:30:39 compute-0 sudo[90194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[90194]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 sudo[90219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:39 compute-0 sudo[90219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[90219]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 sudo[90244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf.new
Jan 27 08:30:39 compute-0 sudo[90244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[90244]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 sudo[90269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:39 compute-0 sudo[90269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[90269]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 sudo[90294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:30:39 compute-0 sudo[90294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[90294]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:39 compute-0 sudo[90338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:39 compute-0 sudo[90338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:39 compute-0 sudo[90338]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:40 compute-0 sudo[90363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf.new
Jan 27 08:30:40 compute-0 sudo[90363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:40 compute-0 sudo[90363]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 27 08:30:40 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1751593799' entity='client.admin' 
Jan 27 08:30:40 compute-0 ceph-mon[74357]: OSD bench result of 9747.923570 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 27 08:30:40 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:40 compute-0 ceph-mon[74357]: Updating compute-1:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:30:40 compute-0 ceph-mon[74357]: Updating compute-0:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:30:40 compute-0 ceph-mon[74357]: Updating compute-2:/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:30:40 compute-0 sudo[90411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:40 compute-0 sudo[90411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:40 compute-0 sudo[90411]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 27 08:30:40 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14307 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:30:40 compute-0 ceph-mgr[74650]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 27 08:30:40 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/2215821541,v1:192.168.122.102:6801/2215821541] boot
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.101171494s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.702796936s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.101091385s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.702796936s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[3.1d( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=3.971578598s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.573623657s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[3.9( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=3.971504450s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.573600769s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.100738764s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.702857971s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[3.1d( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=3.971517086s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.573623657s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[3.9( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=3.971454859s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.573600769s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.100689888s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.702857971s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.100512505s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.702827454s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.100484610s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.702827454s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.100860119s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.703346252s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[3.11( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=3.977456808s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.579963684s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[3.e( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=3.977560043s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.580093384s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[3.11( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=3.977414131s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.579963684s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[3.e( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=3.977534056s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.580093384s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.101175070s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.703773499s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[3.15( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=3.977465391s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.580093384s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.101153374s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.703773499s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[3.15( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=3.977445602s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.580093384s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.101332903s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.704048157s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[3.1a( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=3.977420092s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.580162048s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.101312876s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.704048157s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[3.1a( empty local-lis/les=28/29 n=0 ec=23/16 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=3.977403879s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.580162048s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.100546122s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.703346252s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.098846674s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.703140259s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:30:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 34 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=3.098786116s) [2] r=-1 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.703140259s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:40 compute-0 ceph-mgr[74650]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 27 08:30:40 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 27 08:30:40 compute-0 sudo[90436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf.new
Jan 27 08:30:40 compute-0 sudo[90436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:40 compute-0 sudo[90436]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:40 compute-0 bold_jepsen[90152]: Scheduled rgw.rgw update...
Jan 27 08:30:40 compute-0 bold_jepsen[90152]: Scheduled ingress.rgw.default update...
Jan 27 08:30:40 compute-0 systemd[1]: libpod-12d7c93893b2004944743ada5787ca86b2c989281a8e1b344b67cfd4c1abac06.scope: Deactivated successfully.
Jan 27 08:30:40 compute-0 podman[90101]: 2026-01-27 08:30:40.221982806 +0000 UTC m=+0.762759577 container died 12d7c93893b2004944743ada5787ca86b2c989281a8e1b344b67cfd4c1abac06 (image=quay.io/ceph/ceph:v18, name=bold_jepsen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:40 compute-0 sudo[90462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:40 compute-0 sudo[90462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:40 compute-0 sudo[90462]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d84e5d68759e53baa3974fdd431f0b3c30cb4ad5f85dfa5a7e5baee83086266e-merged.mount: Deactivated successfully.
Jan 27 08:30:40 compute-0 podman[90101]: 2026-01-27 08:30:40.271751044 +0000 UTC m=+0.812527825 container remove 12d7c93893b2004944743ada5787ca86b2c989281a8e1b344b67cfd4c1abac06 (image=quay.io/ceph/ceph:v18, name=bold_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:40 compute-0 systemd[1]: libpod-conmon-12d7c93893b2004944743ada5787ca86b2c989281a8e1b344b67cfd4c1abac06.scope: Deactivated successfully.
Jan 27 08:30:40 compute-0 sudo[90023]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:40 compute-0 sudo[90495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf.new
Jan 27 08:30:40 compute-0 sudo[90495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:40 compute-0 sudo[90495]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:30:40 compute-0 sudo[90527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:40 compute-0 sudo[90527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:40 compute-0 sudo[90527]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:40 compute-0 sudo[90552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-281e9bde-2795-59f4-98ac-90cf5b49a2de/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf.new /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/config/ceph.conf
Jan 27 08:30:40 compute-0 sudo[90552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:40 compute-0 sudo[90552]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v99: 69 pgs: 24 peering, 45 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:40 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d08a157d-024e-493a-b5f3-50156ae1e9d2 does not exist
Jan 27 08:30:40 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 8f62b221-5163-4fb9-acdc-0d7b64afe53a does not exist
Jan 27 08:30:40 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5de5cb10-32bc-4c1a-8271-77ea04b622da does not exist
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:30:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:30:40 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:40 compute-0 sudo[90577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:40 compute-0 sudo[90577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:40 compute-0 sudo[90577]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:40 compute-0 sudo[90602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:30:40 compute-0 sudo[90602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:40 compute-0 sudo[90602]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:40 compute-0 sudo[90627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:40 compute-0 sudo[90627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:40 compute-0 sudo[90627]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:40 compute-0 sudo[90652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:30:40 compute-0 sudo[90652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='client.14307 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:30:41 compute-0 ceph-mon[74357]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 27 08:30:41 compute-0 ceph-mon[74357]: osd.2 [v2:192.168.122.102:6800/2215821541,v1:192.168.122.102:6801/2215821541] boot
Jan 27 08:30:41 compute-0 ceph-mon[74357]: osdmap e34: 3 total, 3 up, 3 in
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:41 compute-0 ceph-mon[74357]: Saving service ingress.rgw.default spec with placement count:2
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:41 compute-0 ceph-mon[74357]: pgmap v99: 69 pgs: 24 peering, 45 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:30:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 27 08:30:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 27 08:30:41 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 27 08:30:41 compute-0 podman[90768]: 2026-01-27 08:30:41.281151548 +0000 UTC m=+0.037874242 container create 476d0c7ea209394d745959d2978344fe41efb8e32e9004b3359f912f0dc28070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_agnesi, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 08:30:41 compute-0 systemd[1]: Started libpod-conmon-476d0c7ea209394d745959d2978344fe41efb8e32e9004b3359f912f0dc28070.scope.
Jan 27 08:30:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:41 compute-0 podman[90768]: 2026-01-27 08:30:41.345267361 +0000 UTC m=+0.101990075 container init 476d0c7ea209394d745959d2978344fe41efb8e32e9004b3359f912f0dc28070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:41 compute-0 podman[90768]: 2026-01-27 08:30:41.349824946 +0000 UTC m=+0.106547650 container start 476d0c7ea209394d745959d2978344fe41efb8e32e9004b3359f912f0dc28070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 27 08:30:41 compute-0 blissful_agnesi[90808]: 167 167
Jan 27 08:30:41 compute-0 podman[90768]: 2026-01-27 08:30:41.353040765 +0000 UTC m=+0.109763459 container attach 476d0c7ea209394d745959d2978344fe41efb8e32e9004b3359f912f0dc28070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 27 08:30:41 compute-0 systemd[1]: libpod-476d0c7ea209394d745959d2978344fe41efb8e32e9004b3359f912f0dc28070.scope: Deactivated successfully.
Jan 27 08:30:41 compute-0 podman[90768]: 2026-01-27 08:30:41.353722743 +0000 UTC m=+0.110445437 container died 476d0c7ea209394d745959d2978344fe41efb8e32e9004b3359f912f0dc28070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 27 08:30:41 compute-0 podman[90768]: 2026-01-27 08:30:41.263258617 +0000 UTC m=+0.019981331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e27140a5f4db00a6691a17faa0fc5366da202f362148baffc2ad9c3000f00246-merged.mount: Deactivated successfully.
Jan 27 08:30:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:41 compute-0 podman[90768]: 2026-01-27 08:30:41.387994255 +0000 UTC m=+0.144716949 container remove 476d0c7ea209394d745959d2978344fe41efb8e32e9004b3359f912f0dc28070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_agnesi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:30:41 compute-0 systemd[1]: libpod-conmon-476d0c7ea209394d745959d2978344fe41efb8e32e9004b3359f912f0dc28070.scope: Deactivated successfully.
Jan 27 08:30:41 compute-0 python3[90805]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:30:41 compute-0 podman[90837]: 2026-01-27 08:30:41.537995178 +0000 UTC m=+0.040164795 container create 3a5437aae44a0e94d0165e913c97a291550bc00308e71f28352f2322c681fca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 27 08:30:41 compute-0 systemd[1]: Started libpod-conmon-3a5437aae44a0e94d0165e913c97a291550bc00308e71f28352f2322c681fca4.scope.
Jan 27 08:30:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6c0c0d2e8699b2cf85c631cfc7fa95eabd3f8496b4d53e8db3728e8d6271f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6c0c0d2e8699b2cf85c631cfc7fa95eabd3f8496b4d53e8db3728e8d6271f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6c0c0d2e8699b2cf85c631cfc7fa95eabd3f8496b4d53e8db3728e8d6271f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6c0c0d2e8699b2cf85c631cfc7fa95eabd3f8496b4d53e8db3728e8d6271f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6c0c0d2e8699b2cf85c631cfc7fa95eabd3f8496b4d53e8db3728e8d6271f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:41 compute-0 podman[90837]: 2026-01-27 08:30:41.52130315 +0000 UTC m=+0.023472767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:41 compute-0 podman[90837]: 2026-01-27 08:30:41.627785316 +0000 UTC m=+0.129954933 container init 3a5437aae44a0e94d0165e913c97a291550bc00308e71f28352f2322c681fca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:41 compute-0 podman[90837]: 2026-01-27 08:30:41.634083489 +0000 UTC m=+0.136253106 container start 3a5437aae44a0e94d0165e913c97a291550bc00308e71f28352f2322c681fca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:41 compute-0 podman[90837]: 2026-01-27 08:30:41.641202955 +0000 UTC m=+0.143372572 container attach 3a5437aae44a0e94d0165e913c97a291550bc00308e71f28352f2322c681fca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 27 08:30:41 compute-0 python3[90921]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769502641.16441-37343-211997946477097/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:30:42 compute-0 sudo[90971]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqnuwidpiiyrtzkuemzhjlqworxlrzjv ; /usr/bin/python3'
Jan 27 08:30:42 compute-0 sudo[90971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:42 compute-0 ceph-mon[74357]: osdmap e35: 3 total, 3 up, 3 in
Jan 27 08:30:42 compute-0 python3[90973]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:42 compute-0 podman[90974]: 2026-01-27 08:30:42.357040831 +0000 UTC m=+0.071064284 container create a80f1388d4671339ebf7b93098ebac7bad91dfafae7667d9a3c1363c60359693 (image=quay.io/ceph/ceph:v18, name=cranky_moser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:42 compute-0 systemd[1]: Started libpod-conmon-a80f1388d4671339ebf7b93098ebac7bad91dfafae7667d9a3c1363c60359693.scope.
Jan 27 08:30:42 compute-0 podman[90974]: 2026-01-27 08:30:42.310062889 +0000 UTC m=+0.024086352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:42 compute-0 gifted_heisenberg[90895]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:30:42 compute-0 gifted_heisenberg[90895]: --> relative data size: 1.0
Jan 27 08:30:42 compute-0 gifted_heisenberg[90895]: --> All data devices are unavailable
Jan 27 08:30:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94990cdd1102a68e5cec1856feeb3824b167584bb62838a128c22c444309fd15/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94990cdd1102a68e5cec1856feeb3824b167584bb62838a128c22c444309fd15/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94990cdd1102a68e5cec1856feeb3824b167584bb62838a128c22c444309fd15/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:42 compute-0 systemd[1]: libpod-3a5437aae44a0e94d0165e913c97a291550bc00308e71f28352f2322c681fca4.scope: Deactivated successfully.
Jan 27 08:30:42 compute-0 podman[90974]: 2026-01-27 08:30:42.454173871 +0000 UTC m=+0.168197314 container init a80f1388d4671339ebf7b93098ebac7bad91dfafae7667d9a3c1363c60359693 (image=quay.io/ceph/ceph:v18, name=cranky_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:42 compute-0 podman[90837]: 2026-01-27 08:30:42.455368583 +0000 UTC m=+0.957538190 container died 3a5437aae44a0e94d0165e913c97a291550bc00308e71f28352f2322c681fca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 08:30:42 compute-0 podman[90974]: 2026-01-27 08:30:42.461602445 +0000 UTC m=+0.175625888 container start a80f1388d4671339ebf7b93098ebac7bad91dfafae7667d9a3c1363c60359693 (image=quay.io/ceph/ceph:v18, name=cranky_moser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:30:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v101: 69 pgs: 24 peering, 45 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-de6c0c0d2e8699b2cf85c631cfc7fa95eabd3f8496b4d53e8db3728e8d6271f4-merged.mount: Deactivated successfully.
Jan 27 08:30:42 compute-0 podman[90837]: 2026-01-27 08:30:42.775137193 +0000 UTC m=+1.277306810 container remove 3a5437aae44a0e94d0165e913c97a291550bc00308e71f28352f2322c681fca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:42 compute-0 podman[90974]: 2026-01-27 08:30:42.794609918 +0000 UTC m=+0.508633361 container attach a80f1388d4671339ebf7b93098ebac7bad91dfafae7667d9a3c1363c60359693 (image=quay.io/ceph/ceph:v18, name=cranky_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:30:42 compute-0 sudo[90652]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:42 compute-0 systemd[1]: libpod-conmon-3a5437aae44a0e94d0165e913c97a291550bc00308e71f28352f2322c681fca4.scope: Deactivated successfully.
Jan 27 08:30:42 compute-0 sudo[91032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:42 compute-0 sudo[91032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:42 compute-0 sudo[91032]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:42 compute-0 sudo[91060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:30:42 compute-0 sudo[91060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:42 compute-0 sudo[91060]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:42 compute-0 sudo[91085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:42 compute-0 sudo[91085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:42 compute-0 sudo[91085]: pam_unix(sudo:session): session closed for user root
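The ceph-admin sudo triplets above (/bin/true, which python3, /bin/true) recur before every remote cephadm operation in this log; assuming the standard behavior of cephadm's ssh layer, they are pre-flight probes for escalation and for a usable interpreter, roughly:

    # hypothetical reconstruction of the cephadm ssh pre-flight probes
    sudo true            # can we escalate at all?
    sudo which python3   # is python3 available to run the cephadm shim?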
Jan 27 08:30:43 compute-0 sudo[91110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:30:43 compute-0 sudo[91110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
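The shim /var/lib/ceph/<fsid>/cephadm.<sha256> invoked above is the checksummed copy of the cephadm binary that the orchestrator pushes to each host; here it asks ceph-volume, run inside the pinned container image, to enumerate LVM-backed OSDs as JSON (the answer is printed by the vibrant_lovelace container further down). The same command from the log, reformatted for readability:

    sudo python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        --timeout 895 \
        ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json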
Jan 27 08:30:43 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14313 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:30:43 compute-0 ceph-mgr[74650]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 27 08:30:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Jan 27 08:30:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 27 08:30:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Jan 27 08:30:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 27 08:30:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Jan 27 08:30:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 27 08:30:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 27 08:30:43 compute-0 ceph-mon[74357]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 27 08:30:43 compute-0 ceph-mon[74357]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 27 08:30:43 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0[74353]: 2026-01-27T08:30:43.080+0000 7f4bcc234640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 27 08:30:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 27 08:30:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e2 new map
Jan 27 08:30:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-27T08:30:43.081071+0000
                                           modified        2026-01-27T08:30:43.081110+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Jan 27 08:30:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 27 08:30:43 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 27 08:30:43 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap cephfs:0
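The burst of mon_command dispatches above is the server side of a single client call, `ceph fs volume create cephfs "compute-0 compute-1 compute-2"` (visible in the mgr audit line at 08:30:43): the volumes module creates a metadata pool, a bulk-flagged data pool, and then the filesystem itself. A sketch of the equivalent manual CLI sequence, reconstructed from the audit entries rather than copied from them:

    # what the mgr dispatched on behalf of `fs volume create`
    ceph osd pool create cephfs.cephfs.meta
    ceph osd pool create cephfs.cephfs.data --bulk
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data

The MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health checks that fire immediately afterwards are the expected transient: the filesystem now exists (fsmap cephfs:0) but no MDS daemon has started yet; both clear once the mds.cephfs service applied below comes up.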
Jan 27 08:30:43 compute-0 ceph-mgr[74650]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 27 08:30:43 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 27 08:30:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 27 08:30:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:43 compute-0 ceph-mgr[74650]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
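Because the volume was created with a placement string, cephadm also persists an mds.cephfs service spec under the config-key mgr/cephadm/spec.mds.cephfs (the config-key set audit entry above carries no command payload — config-key values can hold secrets — so the stored spec is best inspected directly):

    # inspect the service spec cephadm just persisted
    ceph config-key get mgr/cephadm/spec.mds.cephfs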
Jan 27 08:30:43 compute-0 systemd[1]: libpod-a80f1388d4671339ebf7b93098ebac7bad91dfafae7667d9a3c1363c60359693.scope: Deactivated successfully.
Jan 27 08:30:43 compute-0 podman[90974]: 2026-01-27 08:30:43.251873186 +0000 UTC m=+0.965896639 container died a80f1388d4671339ebf7b93098ebac7bad91dfafae7667d9a3c1363c60359693 (image=quay.io/ceph/ceph:v18, name=cranky_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Jan 27 08:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-94990cdd1102a68e5cec1856feeb3824b167584bb62838a128c22c444309fd15-merged.mount: Deactivated successfully.
Jan 27 08:30:43 compute-0 podman[90974]: 2026-01-27 08:30:43.29490632 +0000 UTC m=+1.008929763 container remove a80f1388d4671339ebf7b93098ebac7bad91dfafae7667d9a3c1363c60359693 (image=quay.io/ceph/ceph:v18, name=cranky_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:30:43 compute-0 systemd[1]: libpod-conmon-a80f1388d4671339ebf7b93098ebac7bad91dfafae7667d9a3c1363c60359693.scope: Deactivated successfully.
Jan 27 08:30:43 compute-0 sudo[90971]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:43 compute-0 ceph-mon[74357]: 3.6 scrub starts
Jan 27 08:30:43 compute-0 ceph-mon[74357]: 3.6 scrub ok
Jan 27 08:30:43 compute-0 ceph-mon[74357]: pgmap v101: 69 pgs: 24 peering, 45 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 27 08:30:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 27 08:30:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 27 08:30:43 compute-0 ceph-mon[74357]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 27 08:30:43 compute-0 ceph-mon[74357]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 27 08:30:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 27 08:30:43 compute-0 ceph-mon[74357]: osdmap e36: 3 total, 3 up, 3 in
Jan 27 08:30:43 compute-0 ceph-mon[74357]: fsmap cephfs:0
Jan 27 08:30:43 compute-0 podman[91187]: 2026-01-27 08:30:43.404095011 +0000 UTC m=+0.047708342 container create c200a0c6818a1fb2cac31531415b71fdc39574be98f930668d93a845f8de391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:30:43 compute-0 systemd[1]: Started libpod-conmon-c200a0c6818a1fb2cac31531415b71fdc39574be98f930668d93a845f8de391b.scope.
Jan 27 08:30:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:43 compute-0 podman[91187]: 2026-01-27 08:30:43.380926815 +0000 UTC m=+0.024540246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:43 compute-0 sudo[91227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzxwijydhibvebhveefhpthsylmomptt ; /usr/bin/python3'
Jan 27 08:30:43 compute-0 sudo[91227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:43 compute-0 podman[91187]: 2026-01-27 08:30:43.486684731 +0000 UTC m=+0.130298102 container init c200a0c6818a1fb2cac31531415b71fdc39574be98f930668d93a845f8de391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:43 compute-0 podman[91187]: 2026-01-27 08:30:43.491512374 +0000 UTC m=+0.135125715 container start c200a0c6818a1fb2cac31531415b71fdc39574be98f930668d93a845f8de391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 08:30:43 compute-0 podman[91187]: 2026-01-27 08:30:43.494318491 +0000 UTC m=+0.137931832 container attach c200a0c6818a1fb2cac31531415b71fdc39574be98f930668d93a845f8de391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 27 08:30:43 compute-0 goofy_sinoussi[91226]: 167 167
Jan 27 08:30:43 compute-0 systemd[1]: libpod-c200a0c6818a1fb2cac31531415b71fdc39574be98f930668d93a845f8de391b.scope: Deactivated successfully.
Jan 27 08:30:43 compute-0 conmon[91226]: conmon c200a0c6818a1fb2cac3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c200a0c6818a1fb2cac31531415b71fdc39574be98f930668d93a845f8de391b.scope/container/memory.events
Jan 27 08:30:43 compute-0 podman[91187]: 2026-01-27 08:30:43.497620281 +0000 UTC m=+0.141233632 container died c200a0c6818a1fb2cac31531415b71fdc39574be98f930668d93a845f8de391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a057caf346082342c695abdf1c4472650cf18f745e827e6a75645af35860e2f7-merged.mount: Deactivated successfully.
Jan 27 08:30:43 compute-0 podman[91187]: 2026-01-27 08:30:43.530942937 +0000 UTC m=+0.174556298 container remove c200a0c6818a1fb2cac31531415b71fdc39574be98f930668d93a845f8de391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Jan 27 08:30:43 compute-0 systemd[1]: libpod-conmon-c200a0c6818a1fb2cac31531415b71fdc39574be98f930668d93a845f8de391b.scope: Deactivated successfully.
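The short-lived goofy_sinoussi container exists only to print "167 167": the uid and gid the ceph user owns inside the image, which cephadm uses to chown files it writes on the host (the keyring installed later in this log is deployed as 167:167). A hypothetical reconstruction of the probe, assuming cephadm's usual stat-based lookup:

    # print the owner uid/gid of /var/lib/ceph inside the image
    podman run --rm --entrypoint stat quay.io/ceph/ceph:v18 -c '%u %g' /var/lib/ceph

The conmon "Failed to open cgroups file ... memory.events" warning above is typically harmless for a container this short-lived: the scope is torn down before conmon gets to sample it.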
Jan 27 08:30:43 compute-0 python3[91231]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
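The Ansible task above shells out to podman to run `ceph orch apply` against the new cluster, feeding it the MDS spec written to /tmp/ceph_mds.yml. The same command as logged, reformatted for readability:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        orch apply --in-file /home/ceph_spec.yaml

The spec file itself is never echoed into the log; a minimal sketch consistent with the placement the mgr reports ("compute-0;compute-1;compute-2") — assumed content, not recovered from the log — would be:

    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    EOF

Its acknowledgement is the "Scheduled mds.cephfs update..." line printed by the angry_wescoff container just below.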
Jan 27 08:30:43 compute-0 podman[91252]: 2026-01-27 08:30:43.668994572 +0000 UTC m=+0.035690762 container create d199367f77995e763de1863b5aff9a9609ca5c132143ce220a253533bd23ffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lovelace, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:43 compute-0 podman[91249]: 2026-01-27 08:30:43.672595541 +0000 UTC m=+0.040724880 container create 2f3525455de4d5205695aed2452d7b6c477deac206f4efd40c02e67b666ed0a6 (image=quay.io/ceph/ceph:v18, name=angry_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:43 compute-0 systemd[1]: Started libpod-conmon-2f3525455de4d5205695aed2452d7b6c477deac206f4efd40c02e67b666ed0a6.scope.
Jan 27 08:30:43 compute-0 systemd[1]: Started libpod-conmon-d199367f77995e763de1863b5aff9a9609ca5c132143ce220a253533bd23ffe5.scope.
Jan 27 08:30:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7617cd2444601b38fb1df8b8e210e20954647da3ee2489528925b5b9a2487f57/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7617cd2444601b38fb1df8b8e210e20954647da3ee2489528925b5b9a2487f57/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7617cd2444601b38fb1df8b8e210e20954647da3ee2489528925b5b9a2487f57/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7f85c2d71050243798b246cb85bc2f2bb274eda0d49e48dcd6226d3af0ac89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7f85c2d71050243798b246cb85bc2f2bb274eda0d49e48dcd6226d3af0ac89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7f85c2d71050243798b246cb85bc2f2bb274eda0d49e48dcd6226d3af0ac89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7f85c2d71050243798b246cb85bc2f2bb274eda0d49e48dcd6226d3af0ac89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:43 compute-0 podman[91249]: 2026-01-27 08:30:43.736661473 +0000 UTC m=+0.104790812 container init 2f3525455de4d5205695aed2452d7b6c477deac206f4efd40c02e67b666ed0a6 (image=quay.io/ceph/ceph:v18, name=angry_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:43 compute-0 podman[91252]: 2026-01-27 08:30:43.74020972 +0000 UTC m=+0.106905940 container init d199367f77995e763de1863b5aff9a9609ca5c132143ce220a253533bd23ffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lovelace, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:30:43 compute-0 podman[91249]: 2026-01-27 08:30:43.74714435 +0000 UTC m=+0.115273689 container start 2f3525455de4d5205695aed2452d7b6c477deac206f4efd40c02e67b666ed0a6 (image=quay.io/ceph/ceph:v18, name=angry_wescoff, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:43 compute-0 podman[91252]: 2026-01-27 08:30:43.65288985 +0000 UTC m=+0.019586060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:43 compute-0 podman[91252]: 2026-01-27 08:30:43.749902846 +0000 UTC m=+0.116599036 container start d199367f77995e763de1863b5aff9a9609ca5c132143ce220a253533bd23ffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:30:43 compute-0 podman[91249]: 2026-01-27 08:30:43.75075913 +0000 UTC m=+0.118888499 container attach 2f3525455de4d5205695aed2452d7b6c477deac206f4efd40c02e67b666ed0a6 (image=quay.io/ceph/ceph:v18, name=angry_wescoff, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 27 08:30:43 compute-0 podman[91249]: 2026-01-27 08:30:43.654989817 +0000 UTC m=+0.023119176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:43 compute-0 podman[91252]: 2026-01-27 08:30:43.753720651 +0000 UTC m=+0.120416861 container attach d199367f77995e763de1863b5aff9a9609ca5c132143ce220a253533bd23ffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lovelace, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 27 08:30:43 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 27 08:30:43 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 27 08:30:44 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:30:44 compute-0 ceph-mgr[74650]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 27 08:30:44 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 27 08:30:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 27 08:30:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:44 compute-0 angry_wescoff[91282]: Scheduled mds.cephfs update...
Jan 27 08:30:44 compute-0 systemd[1]: libpod-2f3525455de4d5205695aed2452d7b6c477deac206f4efd40c02e67b666ed0a6.scope: Deactivated successfully.
Jan 27 08:30:44 compute-0 podman[91312]: 2026-01-27 08:30:44.423382458 +0000 UTC m=+0.023743614 container died 2f3525455de4d5205695aed2452d7b6c477deac206f4efd40c02e67b666ed0a6 (image=quay.io/ceph/ceph:v18, name=angry_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:44 compute-0 ceph-mon[74357]: from='client.14313 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:30:44 compute-0 ceph-mon[74357]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 27 08:30:44 compute-0 ceph-mon[74357]: 3.7 scrub starts
Jan 27 08:30:44 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:44 compute-0 ceph-mon[74357]: 3.7 scrub ok
Jan 27 08:30:44 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7617cd2444601b38fb1df8b8e210e20954647da3ee2489528925b5b9a2487f57-merged.mount: Deactivated successfully.
Jan 27 08:30:44 compute-0 podman[91312]: 2026-01-27 08:30:44.49768762 +0000 UTC m=+0.098048766 container remove 2f3525455de4d5205695aed2452d7b6c477deac206f4efd40c02e67b666ed0a6 (image=quay.io/ceph/ceph:v18, name=angry_wescoff, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:44 compute-0 systemd[1]: libpod-conmon-2f3525455de4d5205695aed2452d7b6c477deac206f4efd40c02e67b666ed0a6.scope: Deactivated successfully.
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]: {
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:     "0": [
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:         {
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             "devices": [
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "/dev/loop3"
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             ],
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             "lv_name": "ceph_lv0",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             "lv_size": "7511998464",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             "name": "ceph_lv0",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             "tags": {
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "ceph.cluster_name": "ceph",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "ceph.crush_device_class": "",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "ceph.encrypted": "0",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "ceph.osd_id": "0",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "ceph.type": "block",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:                 "ceph.vdo": "0"
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             },
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             "type": "block",
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:             "vg_name": "ceph_vg0"
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:         }
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]:     ]
Jan 27 08:30:44 compute-0 vibrant_lovelace[91284]: }
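The JSON block above is the answer to the earlier `ceph-volume ... lvm list --format json` call: exactly one LVM-backed OSD, osd.0, a bluestore block device on LV ceph_vg0/ceph_lv0 over /dev/loop3 (~7.5 GB), tagged with this cluster's fsid and the osdspec "default_drive_group". To pull just the id-to-device mapping out of such output (a jq sketch with placeholder shim path, not part of the deployment):

    sudo python3 /var/lib/ceph/<fsid>/cephadm.<sha256> ceph-volume -- lvm list --format json \
        | jq -r 'to_entries[] | "\(.key) \(.value[0].lv_path)"'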
Jan 27 08:30:44 compute-0 sudo[91227]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:44 compute-0 systemd[1]: libpod-d199367f77995e763de1863b5aff9a9609ca5c132143ce220a253533bd23ffe5.scope: Deactivated successfully.
Jan 27 08:30:44 compute-0 podman[91252]: 2026-01-27 08:30:44.542130792 +0000 UTC m=+0.908826982 container died d199367f77995e763de1863b5aff9a9609ca5c132143ce220a253533bd23ffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd7f85c2d71050243798b246cb85bc2f2bb274eda0d49e48dcd6226d3af0ac89-merged.mount: Deactivated successfully.
Jan 27 08:30:44 compute-0 podman[91252]: 2026-01-27 08:30:44.644163506 +0000 UTC m=+1.010859696 container remove d199367f77995e763de1863b5aff9a9609ca5c132143ce220a253533bd23ffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:30:44 compute-0 systemd[1]: libpod-conmon-d199367f77995e763de1863b5aff9a9609ca5c132143ce220a253533bd23ffe5.scope: Deactivated successfully.
Jan 27 08:30:44 compute-0 sudo[91110]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v103: 69 pgs: 24 peering, 45 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:44 compute-0 sudo[91343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:44 compute-0 sudo[91343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:44 compute-0 sudo[91343]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:44 compute-0 sudo[91368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:30:44 compute-0 sudo[91368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:44 compute-0 sudo[91368]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:44 compute-0 sudo[91393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:44 compute-0 sudo[91393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:44 compute-0 sudo[91393]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:44 compute-0 sudo[91418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:30:44 compute-0 sudo[91418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
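This second probe complements the LVM listing: `ceph-volume raw list` scans block devices for bluestore superblocks directly, catching OSDs prepared without LVM, and cephadm runs both to build a complete device inventory. It is the same shim invocation as before with only the tail changed (prefix elided here for brevity):

    # same shim and flags as the lvm probe above, different subcommand
    ... ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json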
Jan 27 08:30:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:30:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:30:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:30:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:30:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:30:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:30:45 compute-0 podman[91483]: 2026-01-27 08:30:45.197014152 +0000 UTC m=+0.047169487 container create b2ef1d3507a0d1dd58c692cfc30d987b351717844c3169220e5154466159b55d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 27 08:30:45 compute-0 systemd[1]: Started libpod-conmon-b2ef1d3507a0d1dd58c692cfc30d987b351717844c3169220e5154466159b55d.scope.
Jan 27 08:30:45 compute-0 podman[91483]: 2026-01-27 08:30:45.173753313 +0000 UTC m=+0.023908678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:45 compute-0 podman[91483]: 2026-01-27 08:30:45.300643261 +0000 UTC m=+0.150798606 container init b2ef1d3507a0d1dd58c692cfc30d987b351717844c3169220e5154466159b55d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 08:30:45 compute-0 podman[91483]: 2026-01-27 08:30:45.307128539 +0000 UTC m=+0.157283884 container start b2ef1d3507a0d1dd58c692cfc30d987b351717844c3169220e5154466159b55d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:45 compute-0 sad_tesla[91500]: 167 167
Jan 27 08:30:45 compute-0 systemd[1]: libpod-b2ef1d3507a0d1dd58c692cfc30d987b351717844c3169220e5154466159b55d.scope: Deactivated successfully.
Jan 27 08:30:45 compute-0 podman[91483]: 2026-01-27 08:30:45.35119297 +0000 UTC m=+0.201348325 container attach b2ef1d3507a0d1dd58c692cfc30d987b351717844c3169220e5154466159b55d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tesla, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:30:45 compute-0 podman[91483]: 2026-01-27 08:30:45.35154275 +0000 UTC m=+0.201698105 container died b2ef1d3507a0d1dd58c692cfc30d987b351717844c3169220e5154466159b55d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 27 08:30:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5df98e28dc633199473d9a3354093da2fc11dbc72fb5b081a32a6a53c446656-merged.mount: Deactivated successfully.
Jan 27 08:30:45 compute-0 podman[91483]: 2026-01-27 08:30:45.41049113 +0000 UTC m=+0.260646475 container remove b2ef1d3507a0d1dd58c692cfc30d987b351717844c3169220e5154466159b55d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 08:30:45 compute-0 systemd[1]: libpod-conmon-b2ef1d3507a0d1dd58c692cfc30d987b351717844c3169220e5154466159b55d.scope: Deactivated successfully.
Jan 27 08:30:45 compute-0 ceph-mon[74357]: 2.17 scrub starts
Jan 27 08:30:45 compute-0 ceph-mon[74357]: 2.17 scrub ok
Jan 27 08:30:45 compute-0 ceph-mon[74357]: from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:30:45 compute-0 ceph-mon[74357]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 27 08:30:45 compute-0 ceph-mon[74357]: pgmap v103: 69 pgs: 24 peering, 45 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:45 compute-0 podman[91576]: 2026-01-27 08:30:45.55966965 +0000 UTC m=+0.042201291 container create 3a76309c162f3a46b96948b875e2cf2d90135a66c76b4fb7423361dc4f161c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:45 compute-0 sudo[91613]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doudwfmeompujwjbaipvjjupbepptjsu ; /usr/bin/python3'
Jan 27 08:30:45 compute-0 sudo[91613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:45 compute-0 systemd[1]: Started libpod-conmon-3a76309c162f3a46b96948b875e2cf2d90135a66c76b4fb7423361dc4f161c8d.scope.
Jan 27 08:30:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080ad12a65cd7bdf15f57ff7513bcc975c45d6d0517cff695bbb879faa988c0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080ad12a65cd7bdf15f57ff7513bcc975c45d6d0517cff695bbb879faa988c0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080ad12a65cd7bdf15f57ff7513bcc975c45d6d0517cff695bbb879faa988c0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080ad12a65cd7bdf15f57ff7513bcc975c45d6d0517cff695bbb879faa988c0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:45 compute-0 podman[91576]: 2026-01-27 08:30:45.544585316 +0000 UTC m=+0.027116987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:45 compute-0 podman[91576]: 2026-01-27 08:30:45.645460019 +0000 UTC m=+0.127991700 container init 3a76309c162f3a46b96948b875e2cf2d90135a66c76b4fb7423361dc4f161c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:45 compute-0 podman[91576]: 2026-01-27 08:30:45.658115536 +0000 UTC m=+0.140647187 container start 3a76309c162f3a46b96948b875e2cf2d90135a66c76b4fb7423361dc4f161c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 08:30:45 compute-0 podman[91576]: 2026-01-27 08:30:45.661232303 +0000 UTC m=+0.143763974 container attach 3a76309c162f3a46b96948b875e2cf2d90135a66c76b4fb7423361dc4f161c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:30:45 compute-0 python3[91618]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:30:45 compute-0 sudo[91613]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:45 compute-0 sudo[91694]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgtvivnvdbkeskpkgonxjjigouoeypnu ; /usr/bin/python3'
Jan 27 08:30:45 compute-0 sudo[91694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:46 compute-0 python3[91696]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769502645.432104-37395-11095649187146/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=a78720c7651b641fc0d432dbe481248898ae80a6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
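The copy task installs the client.openstack keyring owned by 167:167, the fixed UID/GID of the ceph user inside the official Ceph containers, so the containerized daemons can read it across the /etc/ceph bind mount. Outside Ansible the step reduces to roughly the following (a sketch; the source filename ceph_key.rendered is hypothetical, the destination and mode come from the task above):

    # Install the rendered keyring with the container-internal ceph UID/GID
    install -o 167 -g 167 -m 0644 ceph_key.rendered \
        /etc/ceph/ceph.client.openstack.keyring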
Jan 27 08:30:46 compute-0 sudo[91694]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:46 compute-0 sudo[91749]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuskbrrxdaskcoxoorlzheesftyuqfek ; /usr/bin/python3'
Jan 27 08:30:46 compute-0 sudo[91749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:46 compute-0 ceph-mon[74357]: 3.b scrub starts
Jan 27 08:30:46 compute-0 ceph-mon[74357]: 3.b scrub ok
Jan 27 08:30:46 compute-0 vigorous_galois[91619]: {
Jan 27 08:30:46 compute-0 vigorous_galois[91619]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:30:46 compute-0 vigorous_galois[91619]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:30:46 compute-0 vigorous_galois[91619]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:30:46 compute-0 vigorous_galois[91619]:         "osd_id": 0,
Jan 27 08:30:46 compute-0 vigorous_galois[91619]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:30:46 compute-0 vigorous_galois[91619]:         "type": "bluestore"
Jan 27 08:30:46 compute-0 vigorous_galois[91619]:     }
Jan 27 08:30:46 compute-0 vigorous_galois[91619]: }
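The JSON printed by vigorous_galois maps an OSD UUID to its backing LVM device and objectstore type, which matches the shape of ceph-volume raw list output; assuming that is what the container ran, the fields can be tabulated with jq:

    # One line per OSD: id, device, objectstore type
    ceph-volume raw list \
        | jq -r 'to_entries[] | "\(.value.osd_id) \(.value.device) \(.value.type)"'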
Jan 27 08:30:46 compute-0 python3[91751]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
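Stripped of the Ansible wrapper, this task runs a one-shot ceph client container that imports the freshly copied keyring into the cluster auth database; the same command, reflowed for readability:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        auth import -i /etc/ceph/ceph.client.openstack.keyring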
Jan 27 08:30:46 compute-0 systemd[1]: libpod-3a76309c162f3a46b96948b875e2cf2d90135a66c76b4fb7423361dc4f161c8d.scope: Deactivated successfully.
Jan 27 08:30:46 compute-0 podman[91576]: 2026-01-27 08:30:46.551297247 +0000 UTC m=+1.033828918 container died 3a76309c162f3a46b96948b875e2cf2d90135a66c76b4fb7423361dc4f161c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-080ad12a65cd7bdf15f57ff7513bcc975c45d6d0517cff695bbb879faa988c0b-merged.mount: Deactivated successfully.
Jan 27 08:30:46 compute-0 podman[91576]: 2026-01-27 08:30:46.611020889 +0000 UTC m=+1.093552540 container remove 3a76309c162f3a46b96948b875e2cf2d90135a66c76b4fb7423361dc4f161c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galois, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:46 compute-0 systemd[1]: libpod-conmon-3a76309c162f3a46b96948b875e2cf2d90135a66c76b4fb7423361dc4f161c8d.scope: Deactivated successfully.
Jan 27 08:30:46 compute-0 podman[91763]: 2026-01-27 08:30:46.632340664 +0000 UTC m=+0.072546735 container create cfa7583f112fb44ae988ec42c33a61b2a34c94e4e649a7b2b8c06a8782f6d852 (image=quay.io/ceph/ceph:v18, name=flamboyant_johnson, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:46 compute-0 sudo[91418]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:30:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:30:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:46 compute-0 systemd[1]: Started libpod-conmon-cfa7583f112fb44ae988ec42c33a61b2a34c94e4e649a7b2b8c06a8782f6d852.scope.
Jan 27 08:30:46 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev 20ee4094-cd44-4406-abb2-e5c47356d92c (Updating rgw.rgw deployment (+3 -> 3))
Jan 27 08:30:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.igzbmp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 27 08:30:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.igzbmp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 27 08:30:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v104: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.igzbmp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
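Here the cephadm mgr module mints a dedicated cephx identity for the RGW daemon it is about to place on compute-2; the caps grant full mon access, read/write on the mgr, and rwx on any pool tagged for rgw. The equivalent hand-run command, reconstructed from the audit entry above:

    # Same capability profile cephadm requests for each rgw daemon
    ceph auth get-or-create client.rgw.rgw.compute-2.igzbmp \
        mon 'allow *' mgr 'allow rw' osd 'allow rwx tag rgw *=*'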
Jan 27 08:30:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 27 08:30:46 compute-0 podman[91763]: 2026-01-27 08:30:46.603346128 +0000 UTC m=+0.043552229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:30:46 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f97f3de1f7bc8b807c4e6509598bc383b171349fb58391ebf1a92053b44c237/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f97f3de1f7bc8b807c4e6509598bc383b171349fb58391ebf1a92053b44c237/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:46 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.igzbmp on compute-2
Jan 27 08:30:46 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.igzbmp on compute-2
Jan 27 08:30:46 compute-0 podman[91763]: 2026-01-27 08:30:46.716438356 +0000 UTC m=+0.156644447 container init cfa7583f112fb44ae988ec42c33a61b2a34c94e4e649a7b2b8c06a8782f6d852 (image=quay.io/ceph/ceph:v18, name=flamboyant_johnson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:46 compute-0 podman[91763]: 2026-01-27 08:30:46.72276801 +0000 UTC m=+0.162974081 container start cfa7583f112fb44ae988ec42c33a61b2a34c94e4e649a7b2b8c06a8782f6d852 (image=quay.io/ceph/ceph:v18, name=flamboyant_johnson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:30:46 compute-0 podman[91763]: 2026-01-27 08:30:46.725759163 +0000 UTC m=+0.165965304 container attach cfa7583f112fb44ae988ec42c33a61b2a34c94e4e649a7b2b8c06a8782f6d852 (image=quay.io/ceph/ceph:v18, name=flamboyant_johnson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 27 08:30:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Jan 27 08:30:47 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/138299093' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 27 08:30:47 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/138299093' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 27 08:30:47 compute-0 systemd[1]: libpod-cfa7583f112fb44ae988ec42c33a61b2a34c94e4e649a7b2b8c06a8782f6d852.scope: Deactivated successfully.
Jan 27 08:30:47 compute-0 podman[91763]: 2026-01-27 08:30:47.326782545 +0000 UTC m=+0.766988616 container died cfa7583f112fb44ae988ec42c33a61b2a34c94e4e649a7b2b8c06a8782f6d852 (image=quay.io/ceph/ceph:v18, name=flamboyant_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f97f3de1f7bc8b807c4e6509598bc383b171349fb58391ebf1a92053b44c237-merged.mount: Deactivated successfully.
Jan 27 08:30:47 compute-0 podman[91763]: 2026-01-27 08:30:47.388637072 +0000 UTC m=+0.828843163 container remove cfa7583f112fb44ae988ec42c33a61b2a34c94e4e649a7b2b8c06a8782f6d852 (image=quay.io/ceph/ceph:v18, name=flamboyant_johnson, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:47 compute-0 systemd[1]: libpod-conmon-cfa7583f112fb44ae988ec42c33a61b2a34c94e4e649a7b2b8c06a8782f6d852.scope: Deactivated successfully.
Jan 27 08:30:47 compute-0 sudo[91749]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:47 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:47 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:47 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.igzbmp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 27 08:30:47 compute-0 ceph-mon[74357]: pgmap v104: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:47 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.igzbmp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 27 08:30:47 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:47 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:47 compute-0 ceph-mon[74357]: Deploying daemon rgw.rgw.compute-2.igzbmp on compute-2
Jan 27 08:30:47 compute-0 ceph-mon[74357]: 3.9 scrub starts
Jan 27 08:30:47 compute-0 ceph-mon[74357]: 3.9 scrub ok
Jan 27 08:30:47 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/138299093' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 27 08:30:47 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/138299093' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 27 08:30:47 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 27 08:30:47 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 27 08:30:47 compute-0 sudo[91852]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksulysiitpxtswglqreghkjriufxvopt ; /usr/bin/python3'
Jan 27 08:30:47 compute-0 sudo[91852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:48 compute-0 python3[91854]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
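This task verifies the monitor quorum size by piping the JSON cluster status through jq; the tail of the pipeline, isolated, is simply:

    # Count monitors known to the cluster (expected to print 3 here)
    ceph status --format json | jq .monmap.num_mons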
Jan 27 08:30:48 compute-0 podman[91856]: 2026-01-27 08:30:48.151345021 +0000 UTC m=+0.039197916 container create c1a157a4ef7c07f709ee1e06cdd078c4cc6231d71ade726455b0568ba1597846 (image=quay.io/ceph/ceph:v18, name=suspicious_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:48 compute-0 systemd[1]: Started libpod-conmon-c1a157a4ef7c07f709ee1e06cdd078c4cc6231d71ade726455b0568ba1597846.scope.
Jan 27 08:30:48 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49de57872e510233a2d55d4a00302a1e6f4c03eea4b254c77a484dbf21954a03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49de57872e510233a2d55d4a00302a1e6f4c03eea4b254c77a484dbf21954a03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:48 compute-0 podman[91856]: 2026-01-27 08:30:48.134271705 +0000 UTC m=+0.022124620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:48 compute-0 podman[91856]: 2026-01-27 08:30:48.23234167 +0000 UTC m=+0.120194565 container init c1a157a4ef7c07f709ee1e06cdd078c4cc6231d71ade726455b0568ba1597846 (image=quay.io/ceph/ceph:v18, name=suspicious_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 27 08:30:48 compute-0 podman[91856]: 2026-01-27 08:30:48.237521455 +0000 UTC m=+0.125374350 container start c1a157a4ef7c07f709ee1e06cdd078c4cc6231d71ade726455b0568ba1597846 (image=quay.io/ceph/ceph:v18, name=suspicious_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:30:48 compute-0 podman[91856]: 2026-01-27 08:30:48.243532902 +0000 UTC m=+0.131385817 container attach c1a157a4ef7c07f709ee1e06cdd078c4cc6231d71ade726455b0568ba1597846 (image=quay.io/ceph/ceph:v18, name=suspicious_driscoll, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:30:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:30:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v105: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 27 08:30:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.nigpsg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 27 08:30:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.nigpsg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 27 08:30:48 compute-0 ceph-mon[74357]: 2.1a scrub starts
Jan 27 08:30:48 compute-0 ceph-mon[74357]: 2.1a scrub ok
Jan 27 08:30:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.nigpsg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 27 08:30:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 27 08:30:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 27 08:30:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175383581' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 27 08:30:48 compute-0 suspicious_driscoll[91872]: 
Jan 27 08:30:48 compute-0 suspicious_driscoll[91872]: {"fsid":"281e9bde-2795-59f4-98ac-90cf5b49a2de","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":40,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":36,"num_osds":3,"num_up_osds":3,"osd_up_since":1769502640,"num_in_osds":3,"osd_in_since":1769502615,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":69}],"num_pgs":69,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83980288,"bytes_avail":22452015104,"bytes_total":22535995392},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2026-01-27T08:30:36.669992+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.jqbgxp":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.cbywrc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"20ee4094-cd44-4406-abb2-e5c47356d92c":{"message":"Updating rgw.rgw deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 27 08:30:48 compute-0 systemd[1]: libpod-c1a157a4ef7c07f709ee1e06cdd078c4cc6231d71ade726455b0568ba1597846.scope: Deactivated successfully.
Jan 27 08:30:48 compute-0 podman[91856]: 2026-01-27 08:30:48.836528573 +0000 UTC m=+0.724381468 container died c1a157a4ef7c07f709ee1e06cdd078c4cc6231d71ade726455b0568ba1597846 (image=quay.io/ceph/ceph:v18, name=suspicious_driscoll, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 08:30:48 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Jan 27 08:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-49de57872e510233a2d55d4a00302a1e6f4c03eea4b254c77a484dbf21954a03-merged.mount: Deactivated successfully.
Jan 27 08:30:48 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Jan 27 08:30:48 compute-0 podman[91856]: 2026-01-27 08:30:48.881638113 +0000 UTC m=+0.769491008 container remove c1a157a4ef7c07f709ee1e06cdd078c4cc6231d71ade726455b0568ba1597846 (image=quay.io/ceph/ceph:v18, name=suspicious_driscoll, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:48 compute-0 systemd[1]: libpod-conmon-c1a157a4ef7c07f709ee1e06cdd078c4cc6231d71ade726455b0568ba1597846.scope: Deactivated successfully.
Jan 27 08:30:48 compute-0 sudo[91852]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:49 compute-0 sudo[91936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfnxwrnlitapnzjcmenadtmrnksdkioy ; /usr/bin/python3'
Jan 27 08:30:49 compute-0 sudo[91936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:49 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:30:49 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:49 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.nigpsg on compute-1
Jan 27 08:30:49 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.nigpsg on compute-1
Jan 27 08:30:49 compute-0 python3[91938]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:49 compute-0 podman[91939]: 2026-01-27 08:30:49.281035559 +0000 UTC m=+0.037296757 container create d96b207fb3a5a4aa40fe0efac728f523bdea25ce425d20fd4a7ac21fa75505e1 (image=quay.io/ceph/ceph:v18, name=optimistic_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:30:49 compute-0 systemd[1]: Started libpod-conmon-d96b207fb3a5a4aa40fe0efac728f523bdea25ce425d20fd4a7ac21fa75505e1.scope.
Jan 27 08:30:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b282b436eb3e5795f84fc4991e8b637b65833cff23c6c6f03e8daf73ea283ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b282b436eb3e5795f84fc4991e8b637b65833cff23c6c6f03e8daf73ea283ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:49 compute-0 podman[91939]: 2026-01-27 08:30:49.355301741 +0000 UTC m=+0.111562959 container init d96b207fb3a5a4aa40fe0efac728f523bdea25ce425d20fd4a7ac21fa75505e1 (image=quay.io/ceph/ceph:v18, name=optimistic_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:49 compute-0 podman[91939]: 2026-01-27 08:30:49.263187202 +0000 UTC m=+0.019448430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:49 compute-0 podman[91939]: 2026-01-27 08:30:49.361400831 +0000 UTC m=+0.117662029 container start d96b207fb3a5a4aa40fe0efac728f523bdea25ce425d20fd4a7ac21fa75505e1 (image=quay.io/ceph/ceph:v18, name=optimistic_napier, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:30:49 compute-0 podman[91939]: 2026-01-27 08:30:49.402136326 +0000 UTC m=+0.158397604 container attach d96b207fb3a5a4aa40fe0efac728f523bdea25ce425d20fd4a7ac21fa75505e1 (image=quay.io/ceph/ceph:v18, name=optimistic_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 27 08:30:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 27 08:30:49 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 27 08:30:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Jan 27 08:30:49 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 27 08:30:49 compute-0 ceph-mon[74357]: pgmap v105: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:49 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:49 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.nigpsg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 27 08:30:49 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.nigpsg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 27 08:30:49 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1175383581' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 27 08:30:49 compute-0 ceph-mon[74357]: 2.1d scrub starts
Jan 27 08:30:49 compute-0 ceph-mon[74357]: 3.1c deep-scrub starts
Jan 27 08:30:49 compute-0 ceph-mon[74357]: 2.1d scrub ok
Jan 27 08:30:49 compute-0 ceph-mon[74357]: 3.1c deep-scrub ok
Jan 27 08:30:49 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:49 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 27 08:30:49 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3959970971' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 27 08:30:49 compute-0 optimistic_napier[91954]: 
Jan 27 08:30:49 compute-0 optimistic_napier[91954]: {"epoch":3,"fsid":"281e9bde-2795-59f4-98ac-90cf5b49a2de","modified":"2026-01-27T08:30:03.071633Z","created":"2026-01-27T08:27:20.330725Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 27 08:30:49 compute-0 optimistic_napier[91954]: dumped monmap epoch 3
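The monmap confirms a three-monitor quorum on 192.168.122.100/101/102, each listening on msgr v2 (port 3300) and legacy v1 (6789); the odd "disallowed_leaders: " and "removed_ranks: " keys, trailing colon included, appear to be verbatim Ceph output rather than log corruption. To tabulate monitor names against their v2 addresses (a sketch; addrvec[0] is the v2 entry in this dump):

    # Name and msgr-v2 address of every monitor in the map
    ceph mon dump --format json \
        | jq -r '.mons[] | "\(.name) \(.public_addrs.addrvec[0].addr)"'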
Jan 27 08:30:49 compute-0 systemd[1]: libpod-d96b207fb3a5a4aa40fe0efac728f523bdea25ce425d20fd4a7ac21fa75505e1.scope: Deactivated successfully.
Jan 27 08:30:49 compute-0 podman[91939]: 2026-01-27 08:30:49.994425058 +0000 UTC m=+0.750686276 container died d96b207fb3a5a4aa40fe0efac728f523bdea25ce425d20fd4a7ac21fa75505e1 (image=quay.io/ceph/ceph:v18, name=optimistic_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b282b436eb3e5795f84fc4991e8b637b65833cff23c6c6f03e8daf73ea283ba-merged.mount: Deactivated successfully.
Jan 27 08:30:50 compute-0 podman[91939]: 2026-01-27 08:30:50.293778668 +0000 UTC m=+1.050039886 container remove d96b207fb3a5a4aa40fe0efac728f523bdea25ce425d20fd4a7ac21fa75505e1 (image=quay.io/ceph/ceph:v18, name=optimistic_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:50 compute-0 sudo[91936]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:50 compute-0 systemd[1]: libpod-conmon-d96b207fb3a5a4aa40fe0efac728f523bdea25ce425d20fd4a7ac21fa75505e1.scope: Deactivated successfully.
Jan 27 08:30:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:30:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v107: 70 pgs: 1 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:30:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 27 08:30:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:50 compute-0 sudo[92014]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhoojglinaorxxsverwezjwgvjuffzkc ; /usr/bin/python3'
Jan 27 08:30:50 compute-0 sudo[92014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 27 08:30:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 27 08:30:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 27 08:30:50 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 27 08:30:50 compute-0 ceph-mon[74357]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
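The transient POOL_APP_NOT_ENABLED warning fires because .rgw.root was just auto-created and its application tag had not yet landed; the rgw daemon clears it itself with the command visible in the surrounding audit lines, the same one an operator would use:

    # Tag a pool with its application to clear POOL_APP_NOT_ENABLED
    ceph osd pool application enable .rgw.root rgw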
Jan 27 08:30:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dkphsh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 27 08:30:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dkphsh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 27 08:30:50 compute-0 python3[92016]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:50 compute-0 ceph-mon[74357]: Deploying daemon rgw.rgw.compute-1.nigpsg on compute-1
Jan 27 08:30:50 compute-0 ceph-mon[74357]: osdmap e37: 3 total, 3 up, 3 in
Jan 27 08:30:50 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1947759147' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 27 08:30:50 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 27 08:30:50 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3959970971' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 27 08:30:50 compute-0 ceph-mon[74357]: pgmap v107: 70 pgs: 1 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:51 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 27 08:30:51 compute-0 ceph-mon[74357]: osdmap e38: 3 total, 3 up, 3 in
Jan 27 08:30:51 compute-0 podman[92017]: 2026-01-27 08:30:51.053513368 +0000 UTC m=+0.079444038 container create 221c9fc9df8759ed7c9900bc9f15d2b5dc4abfc22bf6e215dc4eb5352a35d370 (image=quay.io/ceph/ceph:v18, name=hopeful_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:51 compute-0 podman[92017]: 2026-01-27 08:30:50.99316047 +0000 UTC m=+0.019091160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dkphsh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 27 08:30:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 27 08:30:51 compute-0 systemd[1]: Started libpod-conmon-221c9fc9df8759ed7c9900bc9f15d2b5dc4abfc22bf6e215dc4eb5352a35d370.scope.
Jan 27 08:30:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21de380376e071bf134db73624615dd6830b9710efde2bdffe13a93fa972ef8f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21de380376e071bf134db73624615dd6830b9710efde2bdffe13a93fa972ef8f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:51 compute-0 podman[92017]: 2026-01-27 08:30:51.213647968 +0000 UTC m=+0.239578638 container init 221c9fc9df8759ed7c9900bc9f15d2b5dc4abfc22bf6e215dc4eb5352a35d370 (image=quay.io/ceph/ceph:v18, name=hopeful_murdock, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 08:30:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:30:51 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:51 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.dkphsh on compute-0
Jan 27 08:30:51 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.dkphsh on compute-0
Jan 27 08:30:51 compute-0 podman[92017]: 2026-01-27 08:30:51.219245344 +0000 UTC m=+0.245176004 container start 221c9fc9df8759ed7c9900bc9f15d2b5dc4abfc22bf6e215dc4eb5352a35d370 (image=quay.io/ceph/ceph:v18, name=hopeful_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:51 compute-0 podman[92017]: 2026-01-27 08:30:51.240326455 +0000 UTC m=+0.266257125 container attach 221c9fc9df8759ed7c9900bc9f15d2b5dc4abfc22bf6e215dc4eb5352a35d370 (image=quay.io/ceph/ceph:v18, name=hopeful_murdock, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:51 compute-0 sudo[92040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:51 compute-0 sudo[92040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:51 compute-0 sudo[92040]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:51 compute-0 sudo[92065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:30:51 compute-0 sudo[92065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:51 compute-0 sudo[92065]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:51 compute-0 sudo[92090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:51 compute-0 sudo[92090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:51 compute-0 sudo[92090]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:51 compute-0 sudo[92115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
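Deployments on this host go through a copied, content-addressed cephadm binary (hence the SHA-256 suffix on the filename) rather than a packaged one, and _orch deploy is the internal entry point the mgr drives over SSH rather than a documented subcommand. What it laid down can be inspected afterwards through the public interface:

    # List the daemons cephadm manages on this host (run as root)
    cephadm ls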
Jan 27 08:30:51 compute-0 sudo[92115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:51 compute-0 podman[92198]: 2026-01-27 08:30:51.746757621 +0000 UTC m=+0.059529347 container create 07f3972db1265aad2b4ebec34fbab41d7ed6fc9a53747940b4c535055e3e076a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:30:51 compute-0 podman[92198]: 2026-01-27 08:30:51.708202283 +0000 UTC m=+0.020973999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:51 compute-0 systemd[1]: Started libpod-conmon-07f3972db1265aad2b4ebec34fbab41d7ed6fc9a53747940b4c535055e3e076a.scope.
Jan 27 08:30:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Jan 27 08:30:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2611789114' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 27 08:30:51 compute-0 hopeful_murdock[92036]: [client.openstack]
Jan 27 08:30:51 compute-0 hopeful_murdock[92036]:         key = AQDBdnhpAAAAABAAc3H+hLFskAdXtvnwUr6AEQ==
Jan 27 08:30:51 compute-0 hopeful_murdock[92036]:         caps mgr = "allow *"
Jan 27 08:30:51 compute-0 hopeful_murdock[92036]:         caps mon = "profile rbd"
Jan 27 08:30:51 compute-0 hopeful_murdock[92036]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
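The retrieved client.openstack keyring carries the usual OpenStack client profile: rbd profiles on the mon and on the vms, volumes, backups, images and cephfs pools, plus full mgr access. To export it directly to a keyring file instead of stdout:

    # Write the client.openstack credentials straight to a keyring file
    ceph auth get client.openstack -o /etc/ceph/ceph.client.openstack.keyring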
Jan 27 08:30:51 compute-0 podman[92198]: 2026-01-27 08:30:51.853240537 +0000 UTC m=+0.166012253 container init 07f3972db1265aad2b4ebec34fbab41d7ed6fc9a53747940b4c535055e3e076a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dhawan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:30:51 compute-0 podman[92198]: 2026-01-27 08:30:51.858005911 +0000 UTC m=+0.170777627 container start 07f3972db1265aad2b4ebec34fbab41d7ed6fc9a53747940b4c535055e3e076a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dhawan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:51 compute-0 affectionate_dhawan[92214]: 167 167
Jan 27 08:30:51 compute-0 systemd[1]: libpod-07f3972db1265aad2b4ebec34fbab41d7ed6fc9a53747940b4c535055e3e076a.scope: Deactivated successfully.
Jan 27 08:30:51 compute-0 systemd[1]: libpod-221c9fc9df8759ed7c9900bc9f15d2b5dc4abfc22bf6e215dc4eb5352a35d370.scope: Deactivated successfully.
Jan 27 08:30:51 compute-0 podman[92017]: 2026-01-27 08:30:51.862120129 +0000 UTC m=+0.888050789 container died 221c9fc9df8759ed7c9900bc9f15d2b5dc4abfc22bf6e215dc4eb5352a35d370 (image=quay.io/ceph/ceph:v18, name=hopeful_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:51 compute-0 podman[92198]: 2026-01-27 08:30:51.861095472 +0000 UTC m=+0.173867188 container attach 07f3972db1265aad2b4ebec34fbab41d7ed6fc9a53747940b4c535055e3e076a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 27 08:30:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 27 08:30:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 27 08:30:51 compute-0 podman[92198]: 2026-01-27 08:30:51.881155247 +0000 UTC m=+0.193926963 container died 07f3972db1265aad2b4ebec34fbab41d7ed6fc9a53747940b4c535055e3e076a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 27 08:30:51 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 27 08:30:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 27 08:30:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 27 08:30:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 27 08:30:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 27 08:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-21de380376e071bf134db73624615dd6830b9710efde2bdffe13a93fa972ef8f-merged.mount: Deactivated successfully.
Jan 27 08:30:52 compute-0 podman[92017]: 2026-01-27 08:30:52.0120392 +0000 UTC m=+1.037969860 container remove 221c9fc9df8759ed7c9900bc9f15d2b5dc4abfc22bf6e215dc4eb5352a35d370 (image=quay.io/ceph/ceph:v18, name=hopeful_murdock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:52 compute-0 ceph-mon[74357]: 2.1c scrub starts
Jan 27 08:30:52 compute-0 ceph-mon[74357]: 2.1c scrub ok
Jan 27 08:30:52 compute-0 ceph-mon[74357]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 27 08:30:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dkphsh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 27 08:30:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dkphsh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 27 08:30:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:30:52 compute-0 ceph-mon[74357]: Deploying daemon rgw.rgw.compute-0.dkphsh on compute-0
Jan 27 08:30:52 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2611789114' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 27 08:30:52 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1947759147' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 27 08:30:52 compute-0 ceph-mon[74357]: osdmap e39: 3 total, 3 up, 3 in
Jan 27 08:30:52 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 27 08:30:52 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4040394839' entity='client.rgw.rgw.compute-1.nigpsg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 27 08:30:52 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 27 08:30:52 compute-0 sudo[92014]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:52 compute-0 systemd[1]: libpod-conmon-221c9fc9df8759ed7c9900bc9f15d2b5dc4abfc22bf6e215dc4eb5352a35d370.scope: Deactivated successfully.
Jan 27 08:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d4531a6710985ea2058fd651ee289675585d26a3f88b5671a788dc51e0f6845-merged.mount: Deactivated successfully.
Jan 27 08:30:52 compute-0 podman[92198]: 2026-01-27 08:30:52.076462445 +0000 UTC m=+0.389234181 container remove 07f3972db1265aad2b4ebec34fbab41d7ed6fc9a53747940b4c535055e3e076a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dhawan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:30:52 compute-0 systemd[1]: libpod-conmon-07f3972db1265aad2b4ebec34fbab41d7ed6fc9a53747940b4c535055e3e076a.scope: Deactivated successfully.
Jan 27 08:30:52 compute-0 systemd[1]: Reloading.
Jan 27 08:30:52 compute-0 systemd-rc-local-generator[92273]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:30:52 compute-0 systemd-sysv-generator[92277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:30:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v110: 71 pgs: 2 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:52 compute-0 systemd[1]: Reloading.
Jan 27 08:30:52 compute-0 systemd-rc-local-generator[92314]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:30:52 compute-0 systemd-sysv-generator[92319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:30:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 27 08:30:52 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 27 08:30:52 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 27 08:30:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 27 08:30:52 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 27 08:30:52 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 27 08:30:52 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 27 08:30:53 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.dkphsh for 281e9bde-2795-59f4-98ac-90cf5b49a2de...
Jan 27 08:30:53 compute-0 ceph-mon[74357]: 3.12 scrub starts
Jan 27 08:30:53 compute-0 ceph-mon[74357]: 3.12 scrub ok
Jan 27 08:30:53 compute-0 ceph-mon[74357]: pgmap v110: 71 pgs: 2 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:53 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 27 08:30:53 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 27 08:30:53 compute-0 ceph-mon[74357]: osdmap e40: 3 total, 3 up, 3 in
Jan 27 08:30:53 compute-0 podman[92424]: 2026-01-27 08:30:53.235324525 +0000 UTC m=+0.049185027 container create b3c4049dd030121d6dc57c596b8806d376b6c189003210c0a33a6f8a030e7f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-rgw-rgw-compute-0-dkphsh, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:30:53 compute-0 podman[92424]: 2026-01-27 08:30:53.207852706 +0000 UTC m=+0.021713258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:30:53 compute-0 sudo[92533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnpyqthidxrlbfwqjvzwzxzdlxnevhxf ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769502653.0949593-37467-52030899653736/async_wrapper.py j429255686784 30 /home/zuul/.ansible/tmp/ansible-tmp-1769502653.0949593-37467-52030899653736/AnsiballZ_command.py _'
Jan 27 08:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7300bc0a0f14a8f61724bcfaeab0fd3eb2cf3745c2cce340eb478d96f2b1394/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7300bc0a0f14a8f61724bcfaeab0fd3eb2cf3745c2cce340eb478d96f2b1394/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7300bc0a0f14a8f61724bcfaeab0fd3eb2cf3745c2cce340eb478d96f2b1394/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7300bc0a0f14a8f61724bcfaeab0fd3eb2cf3745c2cce340eb478d96f2b1394/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.dkphsh supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:53 compute-0 sudo[92533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:53 compute-0 podman[92424]: 2026-01-27 08:30:53.423035575 +0000 UTC m=+0.236896097 container init b3c4049dd030121d6dc57c596b8806d376b6c189003210c0a33a6f8a030e7f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-rgw-rgw-compute-0-dkphsh, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 27 08:30:53 compute-0 podman[92424]: 2026-01-27 08:30:53.429635628 +0000 UTC m=+0.243496130 container start b3c4049dd030121d6dc57c596b8806d376b6c189003210c0a33a6f8a030e7f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-rgw-rgw-compute-0-dkphsh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:53 compute-0 bash[92424]: b3c4049dd030121d6dc57c596b8806d376b6c189003210c0a33a6f8a030e7f19
Jan 27 08:30:53 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.dkphsh for 281e9bde-2795-59f4-98ac-90cf5b49a2de.
Jan 27 08:30:53 compute-0 radosgw[92542]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:30:53 compute-0 radosgw[92542]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 27 08:30:53 compute-0 sudo[92115]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:53 compute-0 radosgw[92542]: framework: beast
Jan 27 08:30:53 compute-0 radosgw[92542]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 27 08:30:53 compute-0 radosgw[92542]: init_numa not setting numa affinity
Jan 27 08:30:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:30:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:30:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 27 08:30:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:53 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev 20ee4094-cd44-4406-abb2-e5c47356d92c (Updating rgw.rgw deployment (+3 -> 3))
Jan 27 08:30:53 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event 20ee4094-cd44-4406-abb2-e5c47356d92c (Updating rgw.rgw deployment (+3 -> 3)) in 7 seconds
Jan 27 08:30:53 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 27 08:30:53 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 27 08:30:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 27 08:30:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 27 08:30:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:53 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev a8add5ba-243e-4d7a-a79c-8a847ef99565 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 27 08:30:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Jan 27 08:30:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:53 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.njrjkb on compute-0
Jan 27 08:30:53 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.njrjkb on compute-0
Jan 27 08:30:53 compute-0 ansible-async_wrapper.py[92540]: Invoked with j429255686784 30 /home/zuul/.ansible/tmp/ansible-tmp-1769502653.0949593-37467-52030899653736/AnsiballZ_command.py _
Jan 27 08:30:53 compute-0 ansible-async_wrapper.py[92612]: Starting module and watcher
Jan 27 08:30:53 compute-0 ansible-async_wrapper.py[92612]: Start watching 92614 (30)
Jan 27 08:30:53 compute-0 ansible-async_wrapper.py[92614]: Start module (92614)
Jan 27 08:30:53 compute-0 ansible-async_wrapper.py[92540]: Return async_wrapper task started.
Jan 27 08:30:53 compute-0 sudo[92533]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:53 compute-0 sudo[92604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:53 compute-0 sudo[92604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:53 compute-0 sudo[92604]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:53 compute-0 sudo[92634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:30:53 compute-0 sudo[92634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:53 compute-0 sudo[92634]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:53 compute-0 python3[92620]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:53 compute-0 sudo[92659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:30:53 compute-0 sudo[92659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:53 compute-0 sudo[92659]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:53 compute-0 podman[92682]: 2026-01-27 08:30:53.805030427 +0000 UTC m=+0.040249625 container create c56acf9cf9fe4948b7e3b77a13578c8a2e32f06eb241e6b5f752b9f8824fd0d7 (image=quay.io/ceph/ceph:v18, name=festive_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:53 compute-0 sudo[92690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:30:53 compute-0 sudo[92690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:30:53 compute-0 systemd[1]: Started libpod-conmon-c56acf9cf9fe4948b7e3b77a13578c8a2e32f06eb241e6b5f752b9f8824fd0d7.scope.
Jan 27 08:30:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d97790019ab9045e90a9fdd4a3d1b9357cd1c5e2c2716ff14bef71935f24e4c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d97790019ab9045e90a9fdd4a3d1b9357cd1c5e2c2716ff14bef71935f24e4c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:53 compute-0 podman[92682]: 2026-01-27 08:30:53.881358453 +0000 UTC m=+0.116577681 container init c56acf9cf9fe4948b7e3b77a13578c8a2e32f06eb241e6b5f752b9f8824fd0d7 (image=quay.io/ceph/ceph:v18, name=festive_brown, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:53 compute-0 podman[92682]: 2026-01-27 08:30:53.787232711 +0000 UTC m=+0.022451929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 27 08:30:53 compute-0 podman[92682]: 2026-01-27 08:30:53.891979741 +0000 UTC m=+0.127198929 container start c56acf9cf9fe4948b7e3b77a13578c8a2e32f06eb241e6b5f752b9f8824fd0d7 (image=quay.io/ceph/ceph:v18, name=festive_brown, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:30:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 27 08:30:53 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 27 08:30:53 compute-0 podman[92682]: 2026-01-27 08:30:53.90609253 +0000 UTC m=+0.141311728 container attach c56acf9cf9fe4948b7e3b77a13578c8a2e32f06eb241e6b5f752b9f8824fd0d7 (image=quay.io/ceph/ceph:v18, name=festive_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 08:30:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 27 08:30:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4164757714' entity='client.rgw.rgw.compute-0.dkphsh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 27 08:30:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 27 08:30:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 27 08:30:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 27 08:30:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 27 08:30:53 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 41 pg[10.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:30:54 compute-0 ceph-mon[74357]: 3.17 scrub starts
Jan 27 08:30:54 compute-0 ceph-mon[74357]: 3.17 scrub ok
Jan 27 08:30:54 compute-0 ceph-mon[74357]: 3.3 scrub starts
Jan 27 08:30:54 compute-0 ceph-mon[74357]: 3.3 scrub ok
Jan 27 08:30:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:54 compute-0 ceph-mon[74357]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 27 08:30:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:54 compute-0 ceph-mon[74357]: Deploying daemon haproxy.rgw.default.compute-0.njrjkb on compute-0
Jan 27 08:30:54 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1947759147' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 27 08:30:54 compute-0 ceph-mon[74357]: osdmap e41: 3 total, 3 up, 3 in
Jan 27 08:30:54 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4164757714' entity='client.rgw.rgw.compute-0.dkphsh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 27 08:30:54 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4040394839' entity='client.rgw.rgw.compute-1.nigpsg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 27 08:30:54 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 27 08:30:54 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 27 08:30:54 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14361 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:30:54 compute-0 festive_brown[92724]: 
Jan 27 08:30:54 compute-0 festive_brown[92724]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 27 08:30:54 compute-0 systemd[1]: libpod-c56acf9cf9fe4948b7e3b77a13578c8a2e32f06eb241e6b5f752b9f8824fd0d7.scope: Deactivated successfully.
Jan 27 08:30:54 compute-0 podman[92682]: 2026-01-27 08:30:54.453018115 +0000 UTC m=+0.688237373 container died c56acf9cf9fe4948b7e3b77a13578c8a2e32f06eb241e6b5f752b9f8824fd0d7 (image=quay.io/ceph/ceph:v18, name=festive_brown, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 27 08:30:54 compute-0 ceph-mgr[74650]: [progress INFO root] Writing back 9 completed events
Jan 27 08:30:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 27 08:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d97790019ab9045e90a9fdd4a3d1b9357cd1c5e2c2716ff14bef71935f24e4c-merged.mount: Deactivated successfully.
Jan 27 08:30:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:54 compute-0 ceph-mgr[74650]: [progress WARNING root] Starting Global Recovery Event,3 pgs not in active + clean state
Jan 27 08:30:54 compute-0 podman[92682]: 2026-01-27 08:30:54.497679973 +0000 UTC m=+0.732899171 container remove c56acf9cf9fe4948b7e3b77a13578c8a2e32f06eb241e6b5f752b9f8824fd0d7 (image=quay.io/ceph/ceph:v18, name=festive_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:54 compute-0 ansible-async_wrapper.py[92614]: Module complete (92614)
Jan 27 08:30:54 compute-0 systemd[1]: libpod-conmon-c56acf9cf9fe4948b7e3b77a13578c8a2e32f06eb241e6b5f752b9f8824fd0d7.scope: Deactivated successfully.
Jan 27 08:30:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v113: 72 pgs: 3 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:54 compute-0 sudo[92859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixknqgkdebzkxebvmglylwhwhgxkrgip ; /usr/bin/python3'
Jan 27 08:30:54 compute-0 sudo[92859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 27 08:30:54 compute-0 python3[92861]: ansible-ansible.legacy.async_status Invoked with jid=j429255686784.92540 mode=status _async_dir=/root/.ansible_async
Jan 27 08:30:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4164757714' entity='client.rgw.rgw.compute-0.dkphsh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 27 08:30:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 27 08:30:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 27 08:30:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 27 08:30:54 compute-0 sudo[92859]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:54 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 27 08:30:54 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 42 pg[10.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:30:55 compute-0 sudo[92932]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyzpxgdlobykkcapslvkothwheenxanj ; /usr/bin/python3'
Jan 27 08:30:55 compute-0 sudo[92932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:55 compute-0 python3[92945]: ansible-ansible.legacy.async_status Invoked with jid=j429255686784.92540 mode=cleanup _async_dir=/root/.ansible_async
Jan 27 08:30:55 compute-0 sudo[92932]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:55 compute-0 ceph-mon[74357]: from='client.14361 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:30:55 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:55 compute-0 ceph-mon[74357]: pgmap v113: 72 pgs: 3 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:30:55 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4164757714' entity='client.rgw.rgw.compute-0.dkphsh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 27 08:30:55 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 27 08:30:55 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 27 08:30:55 compute-0 ceph-mon[74357]: osdmap e42: 3 total, 3 up, 3 in
Jan 27 08:30:55 compute-0 sudo[92989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkxsdwnsgrmlazdgnnnfpyjwpvwtauqr ; /usr/bin/python3'
Jan 27 08:30:55 compute-0 sudo[92989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:55 compute-0 python3[92991]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:30:55 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 27 08:30:55 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 27 08:30:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 27 08:30:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 27 08:30:55 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 27 08:30:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 27 08:30:55 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1873132714' entity='client.rgw.rgw.compute-0.dkphsh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 27 08:30:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 27 08:30:55 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 27 08:30:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 27 08:30:55 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 27 08:30:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:56 compute-0 podman[92992]: 2026-01-27 08:30:56.454178456 +0000 UTC m=+0.603799973 container create ae5d376bd64903ecb547d678125c156462f618480e44bd2d06fa5751a298eec1 (image=quay.io/ceph/ceph:v18, name=happy_yalow, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 08:30:56 compute-0 podman[92992]: 2026-01-27 08:30:56.432621773 +0000 UTC m=+0.582243300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:56 compute-0 podman[92767]: 2026-01-27 08:30:56.490198248 +0000 UTC m=+2.414124254 container create 8881d87d75f79a41ce41c466205334d61da104dd00779c01aa0a4f15bca121d5 (image=quay.io/ceph/haproxy:2.3, name=focused_hoover)
Jan 27 08:30:56 compute-0 systemd[1]: Started libpod-conmon-ae5d376bd64903ecb547d678125c156462f618480e44bd2d06fa5751a298eec1.scope.
Jan 27 08:30:56 compute-0 podman[92767]: 2026-01-27 08:30:56.47534204 +0000 UTC m=+2.399268066 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 27 08:30:56 compute-0 systemd[1]: Started libpod-conmon-8881d87d75f79a41ce41c466205334d61da104dd00779c01aa0a4f15bca121d5.scope.
Jan 27 08:30:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1cb523b4ea15e2be09a7a84319cfe50e90de01867a2a8c42da07f8b94bd93e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1cb523b4ea15e2be09a7a84319cfe50e90de01867a2a8c42da07f8b94bd93e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:56 compute-0 podman[92767]: 2026-01-27 08:30:56.548769681 +0000 UTC m=+2.472695717 container init 8881d87d75f79a41ce41c466205334d61da104dd00779c01aa0a4f15bca121d5 (image=quay.io/ceph/haproxy:2.3, name=focused_hoover)
Jan 27 08:30:56 compute-0 podman[92767]: 2026-01-27 08:30:56.556585725 +0000 UTC m=+2.480511741 container start 8881d87d75f79a41ce41c466205334d61da104dd00779c01aa0a4f15bca121d5 (image=quay.io/ceph/haproxy:2.3, name=focused_hoover)
Jan 27 08:30:56 compute-0 podman[92992]: 2026-01-27 08:30:56.559507982 +0000 UTC m=+0.709129539 container init ae5d376bd64903ecb547d678125c156462f618480e44bd2d06fa5751a298eec1 (image=quay.io/ceph/ceph:v18, name=happy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 08:30:56 compute-0 systemd[1]: libpod-8881d87d75f79a41ce41c466205334d61da104dd00779c01aa0a4f15bca121d5.scope: Deactivated successfully.
Jan 27 08:30:56 compute-0 focused_hoover[93073]: 0 0
Jan 27 08:30:56 compute-0 conmon[93073]: conmon 8881d87d75f79a41ce41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8881d87d75f79a41ce41c466205334d61da104dd00779c01aa0a4f15bca121d5.scope/container/memory.events
Jan 27 08:30:56 compute-0 podman[92992]: 2026-01-27 08:30:56.567180263 +0000 UTC m=+0.716801790 container start ae5d376bd64903ecb547d678125c156462f618480e44bd2d06fa5751a298eec1 (image=quay.io/ceph/ceph:v18, name=happy_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:30:56 compute-0 podman[92767]: 2026-01-27 08:30:56.567659475 +0000 UTC m=+2.491585491 container attach 8881d87d75f79a41ce41c466205334d61da104dd00779c01aa0a4f15bca121d5 (image=quay.io/ceph/haproxy:2.3, name=focused_hoover)
Jan 27 08:30:56 compute-0 podman[92767]: 2026-01-27 08:30:56.568171358 +0000 UTC m=+2.492097384 container died 8881d87d75f79a41ce41c466205334d61da104dd00779c01aa0a4f15bca121d5 (image=quay.io/ceph/haproxy:2.3, name=focused_hoover)
Jan 27 08:30:56 compute-0 podman[92992]: 2026-01-27 08:30:56.597840074 +0000 UTC m=+0.747461631 container attach ae5d376bd64903ecb547d678125c156462f618480e44bd2d06fa5751a298eec1 (image=quay.io/ceph/ceph:v18, name=happy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:30:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c220c22ec029c497d0576b25bdc57958f68851157b2a8c744ba5709840296b8-merged.mount: Deactivated successfully.
Jan 27 08:30:56 compute-0 podman[92767]: 2026-01-27 08:30:56.629721228 +0000 UTC m=+2.553647234 container remove 8881d87d75f79a41ce41c466205334d61da104dd00779c01aa0a4f15bca121d5 (image=quay.io/ceph/haproxy:2.3, name=focused_hoover)
Jan 27 08:30:56 compute-0 systemd[1]: libpod-conmon-8881d87d75f79a41ce41c466205334d61da104dd00779c01aa0a4f15bca121d5.scope: Deactivated successfully.
Jan 27 08:30:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v116: 73 pgs: 1 unknown, 72 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 27 08:30:56 compute-0 systemd[1]: Reloading.
Jan 27 08:30:56 compute-0 systemd-rc-local-generator[93125]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:30:56 compute-0 systemd-sysv-generator[93129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:30:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 27 08:30:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1873132714' entity='client.rgw.rgw.compute-0.dkphsh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 27 08:30:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 27 08:30:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 27 08:30:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 27 08:30:56 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 27 08:30:56 compute-0 systemd[1]: Reloading.
Jan 27 08:30:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 27 08:30:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 27 08:30:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 27 08:30:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1873132714' entity='client.rgw.rgw.compute-0.dkphsh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 27 08:30:57 compute-0 ceph-mon[74357]: 3.a scrub starts
Jan 27 08:30:57 compute-0 ceph-mon[74357]: 3.a scrub ok
Jan 27 08:30:57 compute-0 ceph-mon[74357]: osdmap e43: 3 total, 3 up, 3 in
Jan 27 08:30:57 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1873132714' entity='client.rgw.rgw.compute-0.dkphsh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 27 08:30:57 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/4070382142' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 27 08:30:57 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4258684616' entity='client.rgw.rgw.compute-1.nigpsg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 27 08:30:57 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 27 08:30:57 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 27 08:30:57 compute-0 ceph-mon[74357]: pgmap v116: 73 pgs: 1 unknown, 72 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 27 08:30:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 27 08:30:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 27 08:30:57 compute-0 systemd-sysv-generator[93186]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:30:57 compute-0 systemd-rc-local-generator[93181]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:30:57 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14373 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:30:57 compute-0 happy_yalow[93069]: 
Jan 27 08:30:57 compute-0 happy_yalow[93069]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 27 08:30:57 compute-0 podman[92992]: 2026-01-27 08:30:57.167914155 +0000 UTC m=+1.317535682 container died ae5d376bd64903ecb547d678125c156462f618480e44bd2d06fa5751a298eec1 (image=quay.io/ceph/ceph:v18, name=happy_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:57 compute-0 systemd[1]: libpod-ae5d376bd64903ecb547d678125c156462f618480e44bd2d06fa5751a298eec1.scope: Deactivated successfully.
Jan 27 08:30:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1cb523b4ea15e2be09a7a84319cfe50e90de01867a2a8c42da07f8b94bd93e8-merged.mount: Deactivated successfully.
Jan 27 08:30:57 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.njrjkb for 281e9bde-2795-59f4-98ac-90cf5b49a2de...
Jan 27 08:30:57 compute-0 podman[92992]: 2026-01-27 08:30:57.293266783 +0000 UTC m=+1.442888310 container remove ae5d376bd64903ecb547d678125c156462f618480e44bd2d06fa5751a298eec1 (image=quay.io/ceph/ceph:v18, name=happy_yalow, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:30:57 compute-0 systemd[1]: libpod-conmon-ae5d376bd64903ecb547d678125c156462f618480e44bd2d06fa5751a298eec1.scope: Deactivated successfully.
Jan 27 08:30:57 compute-0 sudo[92989]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:57 compute-0 podman[93256]: 2026-01-27 08:30:57.484448274 +0000 UTC m=+0.042742249 container create 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c00e4893c4ebefdb0f9519a4768377f2d93f522963ec03fe1289af774902fbc8/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:57 compute-0 podman[93256]: 2026-01-27 08:30:57.543794476 +0000 UTC m=+0.102088481 container init 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:30:57 compute-0 podman[93256]: 2026-01-27 08:30:57.548395306 +0000 UTC m=+0.106689281 container start 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:30:57 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb[93271]: [NOTICE] 026/083057 (2) : New worker #1 (4) forked
Jan 27 08:30:57 compute-0 bash[93256]: 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76
Jan 27 08:30:57 compute-0 podman[93256]: 2026-01-27 08:30:57.467732206 +0000 UTC m=+0.026026201 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 27 08:30:57 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb[93271]: [WARNING] 026/083057 (4) : Server backend/rgw.rgw.compute-0.dkphsh is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 27 08:30:57 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.njrjkb for 281e9bde-2795-59f4-98ac-90cf5b49a2de.
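
The haproxy ingress starts before the radosgw backends are listening on port 8082, so its first Layer4 health checks fail with "Connection refused" and the servers are marked DOWN; once each beast frontend begins serving, the Layer7 check (a plain HEAD /, visible in the rgw access logs below as anonymous "HEAD / HTTP/1.0" requests answered with 200) brings them back UP. A rough sketch of the kind of backend stanza involved; the actual haproxy.cfg is rendered by cephadm and will differ in detail:

    backend backend
        option httpchk HEAD /
        server rgw.rgw.compute-0.dkphsh 192.168.122.100:8082 check
        server rgw.rgw.compute-1.nigpsg 192.168.122.101:8082 check
        server rgw.rgw.compute-2.igzbmp 192.168.122.102:8082 check
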
Jan 27 08:30:57 compute-0 sudo[92690]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:30:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:30:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 27 08:30:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:57 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.zetuol on compute-2
Jan 27 08:30:57 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.zetuol on compute-2
Jan 27 08:30:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 27 08:30:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 27 08:30:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1873132714' entity='client.rgw.rgw.compute-0.dkphsh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 27 08:30:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
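
Each of the three radosgw daemons tunes the default.rgw.meta pool at startup, which is why the same "osd pool set" command is dispatched and finished three times. The equivalent manual invocation, taken verbatim from the audited command:

    ceph osd pool set default.rgw.meta pg_autoscale_bias 4

pg_autoscale_bias is a hint that weights the pg_autoscaler toward giving this metadata pool more placement groups than its small data footprint alone would suggest.
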
Jan 27 08:30:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 27 08:30:58 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 27 08:30:58 compute-0 ceph-mon[74357]: 3.18 deep-scrub starts
Jan 27 08:30:58 compute-0 ceph-mon[74357]: 3.18 deep-scrub ok
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1873132714' entity='client.rgw.rgw.compute-0.dkphsh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 27 08:30:58 compute-0 ceph-mon[74357]: osdmap e44: 3 total, 3 up, 3 in
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4258684616' entity='client.rgw.rgw.compute-1.nigpsg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1873132714' entity='client.rgw.rgw.compute-0.dkphsh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/4070382142' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='client.14373 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:30:58 compute-0 ceph-mon[74357]: Deploying daemon haproxy.rgw.default.compute-2.zetuol on compute-2
Jan 27 08:30:58 compute-0 ceph-mon[74357]: 3.11 scrub starts
Jan 27 08:30:58 compute-0 ceph-mon[74357]: 3.11 scrub ok
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-1.nigpsg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1873132714' entity='client.rgw.rgw.compute-0.dkphsh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 27 08:30:58 compute-0 ceph-mon[74357]: from='client.? ' entity='client.rgw.rgw.compute-2.igzbmp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 27 08:30:58 compute-0 ceph-mon[74357]: osdmap e45: 3 total, 3 up, 3 in
Jan 27 08:30:58 compute-0 sudo[93308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfmixaaxcwfumjgcyzjdrbjskmdmrsms ; /usr/bin/python3'
Jan 27 08:30:58 compute-0 sudo[93308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:30:58 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb[93271]: [WARNING] 026/083058 (4) : Server backend/rgw.rgw.compute-1.nigpsg is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 27 08:30:58 compute-0 radosgw[92542]: LDAP not started since no server URIs were provided in the configuration.
Jan 27 08:30:58 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-rgw-rgw-compute-0-dkphsh[92536]: 2026-01-27T08:30:58.246+0000 7f86030f3940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 27 08:30:58 compute-0 radosgw[92542]: framework: beast
Jan 27 08:30:58 compute-0 radosgw[92542]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 27 08:30:58 compute-0 radosgw[92542]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 27 08:30:58 compute-0 radosgw[92542]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 27 08:30:58 compute-0 python3[93310]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
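
The Ansible tasks in this job do not use "cephadm shell"; each ceph call is wrapped in a throwaway quay.io/ceph/ceph:v18 container. Reformatted for readability, the invocation logged above is roughly the following, assuming root and the admin keyring under /etc/ceph:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        orch ls --export -f json
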
Jan 27 08:30:58 compute-0 radosgw[92542]: starting handler: beast
Jan 27 08:30:58 compute-0 radosgw[92542]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 27 08:30:58 compute-0 radosgw[92542]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:30:58 compute-0 radosgw[92542]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 27 08:30:58 compute-0 radosgw[92542]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 27 08:30:58 compute-0 radosgw[92542]: mgrc service_daemon_register rgw.14367 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.dkphsh,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=aa2aa4a4-c3aa-4720-90cc-3bf921d38687,zone_name=default,zonegroup_id=0e27ee1f-f34c-47a4-a456-d26067f089ca,zonegroup_name=default}
Jan 27 08:30:58 compute-0 podman[93338]: 2026-01-27 08:30:58.340092183 +0000 UTC m=+0.071001008 container create 710c72e8368ac202189c5e7efdc85ffcf145ca796b89298396365684778408b8 (image=quay.io/ceph/ceph:v18, name=wizardly_keldysh, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:30:58 compute-0 podman[93338]: 2026-01-27 08:30:58.291448731 +0000 UTC m=+0.022357576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:30:58 compute-0 systemd[1]: Started libpod-conmon-710c72e8368ac202189c5e7efdc85ffcf145ca796b89298396365684778408b8.scope.
Jan 27 08:30:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85578baaa8e34a85d7e12b146775adf99fb97aaadefa693e1b22e8d3bd82a1ac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85578baaa8e34a85d7e12b146775adf99fb97aaadefa693e1b22e8d3bd82a1ac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:58 compute-0 podman[93338]: 2026-01-27 08:30:58.565111189 +0000 UTC m=+0.296020034 container init 710c72e8368ac202189c5e7efdc85ffcf145ca796b89298396365684778408b8 (image=quay.io/ceph/ceph:v18, name=wizardly_keldysh, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:30:58 compute-0 podman[93338]: 2026-01-27 08:30:58.570383287 +0000 UTC m=+0.301292112 container start 710c72e8368ac202189c5e7efdc85ffcf145ca796b89298396365684778408b8 (image=quay.io/ceph/ceph:v18, name=wizardly_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 08:30:58 compute-0 podman[93338]: 2026-01-27 08:30:58.573102378 +0000 UTC m=+0.304011203 container attach 710c72e8368ac202189c5e7efdc85ffcf145ca796b89298396365684778408b8 (image=quay.io/ceph/ceph:v18, name=wizardly_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 27 08:30:58 compute-0 ansible-async_wrapper.py[92612]: Done in kid B.
Jan 27 08:30:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v119: 73 pgs: 1 unknown, 72 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 27 08:30:59 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 27 08:30:59 compute-0 ceph-mon[74357]: pgmap v119: 73 pgs: 1 unknown, 72 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 27 08:30:59 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14379 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:30:59 compute-0 wizardly_keldysh[93870]: 
Jan 27 08:30:59 compute-0 wizardly_keldysh[93870]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 27 08:30:59 compute-0 systemd[1]: libpod-710c72e8368ac202189c5e7efdc85ffcf145ca796b89298396365684778408b8.scope: Deactivated successfully.
Jan 27 08:30:59 compute-0 podman[93338]: 2026-01-27 08:30:59.12602929 +0000 UTC m=+0.856938115 container died 710c72e8368ac202189c5e7efdc85ffcf145ca796b89298396365684778408b8 (image=quay.io/ceph/ceph:v18, name=wizardly_keldysh, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:30:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-85578baaa8e34a85d7e12b146775adf99fb97aaadefa693e1b22e8d3bd82a1ac-merged.mount: Deactivated successfully.
Jan 27 08:30:59 compute-0 podman[93338]: 2026-01-27 08:30:59.171312854 +0000 UTC m=+0.902221679 container remove 710c72e8368ac202189c5e7efdc85ffcf145ca796b89298396365684778408b8 (image=quay.io/ceph/ceph:v18, name=wizardly_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:59 compute-0 systemd[1]: libpod-conmon-710c72e8368ac202189c5e7efdc85ffcf145ca796b89298396365684778408b8.scope: Deactivated successfully.
Jan 27 08:30:59 compute-0 sudo[93308]: pam_unix(sudo:session): session closed for user root
Jan 27 08:30:59 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event d20735cd-5473-4e52-9eb0-0c96103cab5a (Global Recovery Event) in 5 seconds
Jan 27 08:30:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:30:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 27 08:30:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:30:59.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 27 08:30:59 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 27 08:31:00 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 27 08:31:00 compute-0 ceph-mon[74357]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 27 08:31:00 compute-0 ceph-mon[74357]: from='client.14379 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:31:00 compute-0 sudo[93930]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebyzqzqgccbyhstbhizywfmvbpvirsbj ; /usr/bin/python3'
Jan 27 08:31:00 compute-0 sudo[93930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:31:00 compute-0 python3[93932]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:31:00 compute-0 podman[93933]: 2026-01-27 08:31:00.304621777 +0000 UTC m=+0.060880354 container create 8a8f2d7c6a73f90553edb821cd668ef3108f08c3a3b8fe0746867c3e5ffa3eba (image=quay.io/ceph/ceph:v18, name=recursing_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:00 compute-0 systemd[1]: Started libpod-conmon-8a8f2d7c6a73f90553edb821cd668ef3108f08c3a3b8fe0746867c3e5ffa3eba.scope.
Jan 27 08:31:00 compute-0 podman[93933]: 2026-01-27 08:31:00.268797049 +0000 UTC m=+0.025055636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:31:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70db425158e961f1ffa46c7d2798dc6efb5cd38629b2ae68d3027324811159f7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70db425158e961f1ffa46c7d2798dc6efb5cd38629b2ae68d3027324811159f7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:00 compute-0 podman[93933]: 2026-01-27 08:31:00.414707246 +0000 UTC m=+0.170965823 container init 8a8f2d7c6a73f90553edb821cd668ef3108f08c3a3b8fe0746867c3e5ffa3eba (image=quay.io/ceph/ceph:v18, name=recursing_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:00 compute-0 podman[93933]: 2026-01-27 08:31:00.425773245 +0000 UTC m=+0.182031802 container start 8a8f2d7c6a73f90553edb821cd668ef3108f08c3a3b8fe0746867c3e5ffa3eba (image=quay.io/ceph/ceph:v18, name=recursing_lumiere, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:31:00 compute-0 podman[93933]: 2026-01-27 08:31:00.452903725 +0000 UTC m=+0.209162272 container attach 8a8f2d7c6a73f90553edb821cd668ef3108f08c3a3b8fe0746867c3e5ffa3eba (image=quay.io/ceph/ceph:v18, name=recursing_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v120: 73 pgs: 73 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 202 KiB/s rd, 5.9 KiB/s wr, 376 op/s
Jan 27 08:31:00 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:31:00 compute-0 recursing_lumiere[93948]: 
Jan 27 08:31:00 compute-0 recursing_lumiere[93948]: [{"container_id": "7962a418399e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.79%", "created": "2026-01-27T08:28:41.011055Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-27T08:28:41.060375Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-27T08:29:32.754536Z", "memory_usage": 11618222, "ports": [], "service_name": "crash", "started": "2026-01-27T08:28:40.895236Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@crash.compute-0", "version": "18.2.7"}, {"container_id": "8039fbf5b150", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.84%", "created": "2026-01-27T08:29:14.957824Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2026-01-27T08:29:15.051047Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-27T08:30:30.992986Z", "memory_usage": 11901337, "ports": [], "service_name": "crash", "started": "2026-01-27T08:29:14.860647Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@crash.compute-1", "version": "18.2.7"}, {"container_id": "dae3021695d1", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.57%", "created": "2026-01-27T08:30:13.133344Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2026-01-27T08:30:13.205592Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-27T08:30:32.060202Z", "memory_usage": 11649679, "ports": [], "service_name": "crash", "started": "2026-01-27T08:30:12.988252Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@crash.compute-2", "version": "18.2.7"}, {"daemon_id": "rgw.default.compute-0.njrjkb", "daemon_name": "haproxy.rgw.default.compute-0.njrjkb", "daemon_type": "haproxy", "events": ["2026-01-27T08:30:57.651183Z daemon:haproxy.rgw.default.compute-0.njrjkb [INFO] \"Deployed haproxy.rgw.default.compute-0.njrjkb on host 'compute-0'\""], "hostname": 
"compute-0", "is_active": false, "ports": [8080, 8999], "service_name": "ingress.rgw.default", "status": 2, "status_desc": "starting"}, {"container_id": "3429ee293a25", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "42.21%", "created": "2026-01-27T08:27:29.897153Z", "daemon_id": "compute-0.vujqxq", "daemon_name": "mgr.compute-0.vujqxq", "daemon_type": "mgr", "events": ["2026-01-27T08:28:43.731979Z daemon:mgr.compute-0.vujqxq [INFO] \"Reconfigured mgr.compute-0.vujqxq on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-27T08:29:32.754478Z", "memory_usage": 548090675, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-27T08:27:29.614042Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@mgr.compute-0.vujqxq", "version": "18.2.7"}, {"container_id": "2e8b86fae084", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "95.85%", "created": "2026-01-27T08:30:09.695207Z", "daemon_id": "compute-1.jqbgxp", "daemon_name": "mgr.compute-1.jqbgxp", "daemon_type": "mgr", "events": ["2026-01-27T08:30:09.894911Z daemon:mgr.compute-1.jqbgxp [INFO] \"Deployed mgr.compute-1.jqbgxp on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-27T08:30:30.993405Z", "memory_usage": 513802240, "ports": [8765], "service_name": "mgr", "started": "2026-01-27T08:30:09.608369Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@mgr.compute-1.jqbgxp", "version": "18.2.7"}, {"container_id": "e2018cbeed59", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "78.08%", "created": "2026-01-27T08:30:03.920088Z", "daemon_id": "compute-2.cbywrc", "daemon_name": "mgr.compute-2.cbywrc", "daemon_type": "mgr", "events": ["2026-01-27T08:30:08.144653Z daemon:mgr.compute-2.cbywrc [INFO] \"Deployed mgr.compute-2.cbywrc on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-27T08:30:32.060120Z", "memory_usage": 512229376, "ports": [8765], "service_name": "mgr", "started": "2026-01-27T08:30:03.755617Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@mgr.compute-2.cbywrc", "version": "18.2.7"}, {"container_id": "b81872c9cb50", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", 
"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.45%", "created": "2026-01-27T08:27:22.602951Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-27T08:28:43.102827Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-27T08:29:32.754406Z", "memory_request": 2147483648, "memory_usage": 31971082, "ports": [], "service_name": "mon", "started": "2026-01-27T08:27:26.164603Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@mon.compute-0", "version": "18.2.7"}, {"container_id": "251b1a4718b2", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.26%", "created": "2026-01-27T08:29:58.968933Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2026-01-27T08:30:02.291852Z daemon:mon.compute-1 [INFO] \"Deployed mon.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-27T08:30:30.993282Z", "memory_request": 2147483648, "memory_usage": 32233226, "ports": [], "service_name": "mon", "started": "2026-01-27T08:29:58.877426Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@mon.compute-1", "version": "18.2.7"}, {"container_id": "1da77a7c0a90", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.57%", "created": "2026-01-27T08:29:57.120898Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "events": ["2026-01-27T08:29:57.168376Z daemon:mon.compute-2 [INFO] \"Deployed mon.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-27T08:30:32.060012Z", "memory_request": 2147483648, "memory_usage": 31541166, "ports": [], "service_name": "mon", "started": "2026-01-27T08:29:57.017077Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@mon.compute-2", "version": "18.2.7"}, {"container_id": "46a0a8c9f96b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "8.30%", "created": 
"2026-01-27T08:29:29.328351Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-27T08:29:29.378541Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-27T08:29:32.754591Z", "memory_request": 4294967296, "memory_usage": 34644951, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-27T08:29:29.228735Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@osd.0", "version": "18.2.7"}, {"container_id": "6c1249c6f24e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.53%", "created": "2026-01-27T08:29:26.606115Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-27T08:29:26.716056Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-27T08:30:30.993153Z", "memory_request": 5502918246, "memory_usage": 61006151, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-27T08:29:26.506264Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@osd.1", "version": "18.2.7"}, {"container_id": "fdbe07ce7f3d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "6.18%", "created": "2026-01-27T08:30:27.498744Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-27T08:30:27.649122Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-27T08:30:32.060276Z", "memory_request": 4294967296, "memory_usage": 33019658, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-27T08:30:27.273435Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.dkphsh", "daemon_name": "rgw.rgw.compute-0.dkphsh", "daemon_type": "rgw", "events": ["2026-01-27T08:30:53.519981Z daemon:rgw.rgw.compute-0.dkphsh [INFO] \"Deployed rgw.rgw.compute-0.dkphsh on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}, {"daemon_id": "rgw.compute-1.nigpsg", "daemon_name": "rgw.rgw.compute-1.nigpsg", "daemon_type": "rgw", "events": ["2026-01-27T08:30:50.817013Z daemon:rgw.rgw.compute-1.nigpsg [INFO] \"Deployed rgw.rgw.compute-1.nigpsg on host 'compute-1'\""], "hostname": "compute-1", "ip": "192.168.122.101", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}, {"daemon_id": "rgw.compute-2.igzbmp", "daemon_name": 
"rgw.rgw.compute-2.igzbmp", "daemon_type": "rgw", "events": ["2026-01-27T08:30:48.676574Z daemon:rgw.rgw.compute-2.igzbmp [INFO] \"Deployed rgw.rgw.compute-2.igzbmp on host 'compute-2'\""], "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Jan 27 08:31:00 compute-0 systemd[1]: libpod-8a8f2d7c6a73f90553edb821cd668ef3108f08c3a3b8fe0746867c3e5ffa3eba.scope: Deactivated successfully.
Jan 27 08:31:00 compute-0 podman[93933]: 2026-01-27 08:31:00.996862202 +0000 UTC m=+0.753120759 container died 8a8f2d7c6a73f90553edb821cd668ef3108f08c3a3b8fe0746867c3e5ffa3eba (image=quay.io/ceph/ceph:v18, name=recursing_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:01 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 27 08:31:01 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 27 08:31:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-70db425158e961f1ffa46c7d2798dc6efb5cd38629b2ae68d3027324811159f7-merged.mount: Deactivated successfully.
Jan 27 08:31:01 compute-0 podman[93933]: 2026-01-27 08:31:01.064275925 +0000 UTC m=+0.820534472 container remove 8a8f2d7c6a73f90553edb821cd668ef3108f08c3a3b8fe0746867c3e5ffa3eba (image=quay.io/ceph/ceph:v18, name=recursing_lumiere, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:31:01 compute-0 systemd[1]: libpod-conmon-8a8f2d7c6a73f90553edb821cd668ef3108f08c3a3b8fe0746867c3e5ffa3eba.scope: Deactivated successfully.
Jan 27 08:31:01 compute-0 sudo[93930]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:01 compute-0 ceph-mon[74357]: 3.d scrub starts
Jan 27 08:31:01 compute-0 ceph-mon[74357]: 3.d scrub ok
Jan 27 08:31:01 compute-0 ceph-mon[74357]: pgmap v120: 73 pgs: 73 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 202 KiB/s rd, 5.9 KiB/s wr, 376 op/s
Jan 27 08:31:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:01 compute-0 rsyslogd[1007]: message too long (14438) with configured size 8096, begin of message is: [{"container_id": "7962a418399e", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
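
The 14438-byte "orch ps" JSON exceeds rsyslog's configured 8096-byte message limit, so the copy that reaches syslog files is truncated; the journal itself keeps the full record. If the complete message is wanted on the rsyslog side as well, the limit can be raised. A minimal sketch, noting that $MaxMessageSize must appear near the top of the file, before any input modules are loaded:

    # /etc/rsyslog.conf
    $MaxMessageSize 64k
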
Jan 27 08:31:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:01.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:01 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb[93271]: [WARNING] 026/083101 (4) : Server backend/rgw.rgw.compute-0.dkphsh is UP, reason: Layer7 check passed, code: 200, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 27 08:31:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:01.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:31:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:31:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 27 08:31:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Jan 27 08:31:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:01 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 27 08:31:01 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 27 08:31:01 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 27 08:31:01 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 27 08:31:01 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.knqeph on compute-0
Jan 27 08:31:01 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.knqeph on compute-0
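
For the ingress service, cephadm pairs each haproxy with a keepalived that holds the virtual IP 192.168.122.2 on br-ex, using VRRP router ID 50 from the spec's first_virtual_router_id. Roughly the shape of the rendered keepalived.conf; the priorities, authentication, and check scripts below are placeholders, since the real values vary per host and cephadm version:

    vrrp_instance VI_0 {
        state MASTER
        interface br-ex
        virtual_router_id 50
        priority 100
        virtual_ipaddress {
            192.168.122.2/24 dev br-ex
        }
    }
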
Jan 27 08:31:01 compute-0 sudo[93986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:01 compute-0 sudo[93986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:01 compute-0 sudo[93986]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:01 compute-0 sudo[94047]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoblevgzdzofkiehxahdkkhdwjgfvutr ; /usr/bin/python3'
Jan 27 08:31:01 compute-0 sudo[94047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:31:01 compute-0 sudo[94024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:01 compute-0 sudo[94024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:01 compute-0 sudo[94024]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:01 compute-0 sudo[94062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:01 compute-0 sudo[94062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:01 compute-0 sudo[94062]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:02 compute-0 sudo[94087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:31:02 compute-0 sudo[94087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:02 compute-0 python3[94059]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:31:02 compute-0 podman[94112]: 2026-01-27 08:31:02.098026404 +0000 UTC m=+0.051991131 container create 88434b701c1c9460f7ba4e1d6b4f69d126abd8db092c16f996319688e9b37ae9 (image=quay.io/ceph/ceph:v18, name=distracted_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:31:02 compute-0 ceph-mon[74357]: 3.19 scrub starts
Jan 27 08:31:02 compute-0 ceph-mon[74357]: 3.19 scrub ok
Jan 27 08:31:02 compute-0 ceph-mon[74357]: from='client.14385 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:31:02 compute-0 ceph-mon[74357]: 3.f scrub starts
Jan 27 08:31:02 compute-0 ceph-mon[74357]: 3.f scrub ok
Jan 27 08:31:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:02 compute-0 ceph-mon[74357]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 27 08:31:02 compute-0 ceph-mon[74357]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 27 08:31:02 compute-0 ceph-mon[74357]: Deploying daemon keepalived.rgw.default.compute-0.knqeph on compute-0
Jan 27 08:31:02 compute-0 systemd[1]: Started libpod-conmon-88434b701c1c9460f7ba4e1d6b4f69d126abd8db092c16f996319688e9b37ae9.scope.
Jan 27 08:31:02 compute-0 podman[94112]: 2026-01-27 08:31:02.072435625 +0000 UTC m=+0.026400372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:31:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a6ed56f64790fe42688517a84a7f2508991afb701ffe37e9248ddbac612263/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a6ed56f64790fe42688517a84a7f2508991afb701ffe37e9248ddbac612263/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:02 compute-0 podman[94112]: 2026-01-27 08:31:02.190628986 +0000 UTC m=+0.144593723 container init 88434b701c1c9460f7ba4e1d6b4f69d126abd8db092c16f996319688e9b37ae9 (image=quay.io/ceph/ceph:v18, name=distracted_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 08:31:02 compute-0 podman[94112]: 2026-01-27 08:31:02.197904467 +0000 UTC m=+0.151869184 container start 88434b701c1c9460f7ba4e1d6b4f69d126abd8db092c16f996319688e9b37ae9 (image=quay.io/ceph/ceph:v18, name=distracted_panini, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:02 compute-0 podman[94112]: 2026-01-27 08:31:02.201777538 +0000 UTC m=+0.155742285 container attach 88434b701c1c9460f7ba4e1d6b4f69d126abd8db092c16f996319688e9b37ae9 (image=quay.io/ceph/ceph:v18, name=distracted_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:02 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb[93271]: [WARNING] 026/083102 (4) : Server backend/rgw.rgw.compute-1.nigpsg is UP, reason: Layer7 check passed, code: 200, check duration: 2ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 27 08:31:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v121: 73 pgs: 73 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 173 KiB/s rd, 5.1 KiB/s wr, 322 op/s
Jan 27 08:31:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 27 08:31:02 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/919023144' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 27 08:31:02 compute-0 distracted_panini[94136]: 
Jan 27 08:31:02 compute-0 distracted_panini[94136]: {"fsid":"281e9bde-2795-59f4-98ac-90cf5b49a2de","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":54,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":45,"num_osds":3,"num_up_osds":3,"osd_up_since":1769502640,"num_in_osds":3,"osd_in_since":1769502615,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":73}],"num_pgs":73,"num_pools":11,"num_objects":193,"data_bytes":465662,"bytes_used":84443136,"bytes_avail":22451552256,"bytes_total":22535995392,"read_bytes_sec":206866,"write_bytes_sec":6063,"read_op_per_sec":244,"write_op_per_sec":132},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2026-01-27T08:30:58.675602+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.jqbgxp":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.cbywrc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14367":{"start_epoch":4,"start_stamp":"2026-01-27T08:30:58.314858+0000","gid":14367,"addr":"192.168.122.100:0/1873132714","metadata":{"arch":"x86_64","ceph_release":"reef","ceph_version":"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)","ceph_version_short":"18.2.7","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.dkphsh","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864316","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"aa2aa4a4-c3aa-4720-90cc-3bf921d38687","zone_name":"default","zonegroup_id":"0e27ee1f-f34c-47a4-a456-d26067f089ca","zonegroup_name":"default"},"task_status":{}},"24139":{"start_epoch":4,"start_stamp":"2026-01-27T08:30:58.330394+0000","gid":24139,"addr":"192.168.122.102:0/4070382142","metadata":{"arch":"x86_64","ceph_release":"reef","ceph_version":"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef 
(stable)","ceph_version_short":"18.2.7","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.igzbmp","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864308","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"aa2aa4a4-c3aa-4720-90cc-3bf921d38687","zone_name":"default","zonegroup_id":"0e27ee1f-f34c-47a4-a456-d26067f089ca","zonegroup_name":"default"},"task_status":{}},"24140":{"start_epoch":4,"start_stamp":"2026-01-27T08:30:58.307371+0000","gid":24140,"addr":"192.168.122.101:0/4258684616","metadata":{"arch":"x86_64","ceph_release":"reef","ceph_version":"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)","ceph_version_short":"18.2.7","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.nigpsg","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864308","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"aa2aa4a4-c3aa-4720-90cc-3bf921d38687","zone_name":"default","zonegroup_id":"0e27ee1f-f34c-47a4-a456-d26067f089ca","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"a8add5ba-243e-4d7a-a79c-8a847ef99565":{"message":"Updating ingress.rgw.default deployment (+4 -> 4) (4s)\n      [=======.....................] (remaining: 12s)","progress":0.25,"add_to_ceph_s":true}}}
Jan 27 08:31:02 compute-0 systemd[1]: libpod-88434b701c1c9460f7ba4e1d6b4f69d126abd8db092c16f996319688e9b37ae9.scope: Deactivated successfully.
Jan 27 08:31:02 compute-0 podman[94112]: 2026-01-27 08:31:02.825412689 +0000 UTC m=+0.779377406 container died 88434b701c1c9460f7ba4e1d6b4f69d126abd8db092c16f996319688e9b37ae9 (image=quay.io/ceph/ceph:v18, name=distracted_panini, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 08:31:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-48a6ed56f64790fe42688517a84a7f2508991afb701ffe37e9248ddbac612263-merged.mount: Deactivated successfully.
Jan 27 08:31:02 compute-0 podman[94112]: 2026-01-27 08:31:02.876187237 +0000 UTC m=+0.830151954 container remove 88434b701c1c9460f7ba4e1d6b4f69d126abd8db092c16f996319688e9b37ae9 (image=quay.io/ceph/ceph:v18, name=distracted_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 27 08:31:02 compute-0 systemd[1]: libpod-conmon-88434b701c1c9460f7ba4e1d6b4f69d126abd8db092c16f996319688e9b37ae9.scope: Deactivated successfully.
Jan 27 08:31:02 compute-0 sudo[94047]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:03 compute-0 ceph-mon[74357]: pgmap v121: 73 pgs: 73 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 173 KiB/s rd, 5.1 KiB/s wr, 322 op/s
Jan 27 08:31:03 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/919023144' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 27 08:31:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:03.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:31:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:03.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:31:03 compute-0 sudo[94278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xczprpochzioynewipzkqgtacvfkylog ; /usr/bin/python3'
Jan 27 08:31:03 compute-0 sudo[94278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:31:03 compute-0 python3[94280]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:31:04 compute-0 ceph-mgr[74650]: [progress INFO root] Writing back 10 completed events
Jan 27 08:31:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 27 08:31:04 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v122: 73 pgs: 73 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 145 KiB/s rd, 4.2 KiB/s wr, 270 op/s
Jan 27 08:31:05 compute-0 podman[94281]: 2026-01-27 08:31:05.442936752 +0000 UTC m=+1.573034555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:31:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:31:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:05.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:31:05 compute-0 podman[94281]: 2026-01-27 08:31:05.620195369 +0000 UTC m=+1.750293152 container create 73737941942f5d23aeaca87075da3d5b3322a23ccd8215c355359d6597aba892 (image=quay.io/ceph/ceph:v18, name=funny_greider, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:31:05 compute-0 ceph-mon[74357]: 3.1e scrub starts
Jan 27 08:31:05 compute-0 ceph-mon[74357]: 3.1e scrub ok
Jan 27 08:31:05 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:05 compute-0 ceph-mon[74357]: pgmap v122: 73 pgs: 73 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 145 KiB/s rd, 4.2 KiB/s wr, 270 op/s
Jan 27 08:31:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:05.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:05 compute-0 systemd[1]: Started libpod-conmon-73737941942f5d23aeaca87075da3d5b3322a23ccd8215c355359d6597aba892.scope.
Jan 27 08:31:05 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49129b126d21d4b272b7b6796e4e1b2a344881857122086a0a520243f01ab24c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49129b126d21d4b272b7b6796e4e1b2a344881857122086a0a520243f01ab24c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:05 compute-0 podman[94281]: 2026-01-27 08:31:05.737387614 +0000 UTC m=+1.867485487 container init 73737941942f5d23aeaca87075da3d5b3322a23ccd8215c355359d6597aba892 (image=quay.io/ceph/ceph:v18, name=funny_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 27 08:31:05 compute-0 podman[94281]: 2026-01-27 08:31:05.743737089 +0000 UTC m=+1.873834862 container start 73737941942f5d23aeaca87075da3d5b3322a23ccd8215c355359d6597aba892 (image=quay.io/ceph/ceph:v18, name=funny_greider, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:05 compute-0 podman[94174]: 2026-01-27 08:31:05.74832134 +0000 UTC m=+3.426131064 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 27 08:31:05 compute-0 podman[94281]: 2026-01-27 08:31:05.764123383 +0000 UTC m=+1.894221196 container attach 73737941942f5d23aeaca87075da3d5b3322a23ccd8215c355359d6597aba892 (image=quay.io/ceph/ceph:v18, name=funny_greider, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:31:05 compute-0 podman[94174]: 2026-01-27 08:31:05.799749054 +0000 UTC m=+3.477558748 container create 38d5ea8c2d9ad52aeae9adab22f5c0d45a8ce76e1bb1f790d6e7ec3b97555bc3 (image=quay.io/ceph/keepalived:2.2.4, name=elated_lovelace, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, version=2.2.4, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, description=keepalived for Ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 08:31:05 compute-0 systemd[1]: Started libpod-conmon-38d5ea8c2d9ad52aeae9adab22f5c0d45a8ce76e1bb1f790d6e7ec3b97555bc3.scope.
Jan 27 08:31:05 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:05 compute-0 podman[94174]: 2026-01-27 08:31:05.871397979 +0000 UTC m=+3.549207773 container init 38d5ea8c2d9ad52aeae9adab22f5c0d45a8ce76e1bb1f790d6e7ec3b97555bc3 (image=quay.io/ceph/keepalived:2.2.4, name=elated_lovelace, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, distribution-scope=public, release=1793)
Jan 27 08:31:05 compute-0 podman[94174]: 2026-01-27 08:31:05.882380616 +0000 UTC m=+3.560190350 container start 38d5ea8c2d9ad52aeae9adab22f5c0d45a8ce76e1bb1f790d6e7ec3b97555bc3 (image=quay.io/ceph/keepalived:2.2.4, name=elated_lovelace, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, description=keepalived for Ceph, com.redhat.component=keepalived-container, name=keepalived, version=2.2.4, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, distribution-scope=public, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 27 08:31:05 compute-0 elated_lovelace[94346]: 0 0
Jan 27 08:31:05 compute-0 systemd[1]: libpod-38d5ea8c2d9ad52aeae9adab22f5c0d45a8ce76e1bb1f790d6e7ec3b97555bc3.scope: Deactivated successfully.
Jan 27 08:31:05 compute-0 conmon[94346]: conmon 38d5ea8c2d9ad52aeae9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38d5ea8c2d9ad52aeae9adab22f5c0d45a8ce76e1bb1f790d6e7ec3b97555bc3.scope/container/memory.events
Jan 27 08:31:05 compute-0 podman[94174]: 2026-01-27 08:31:05.892950243 +0000 UTC m=+3.570759947 container attach 38d5ea8c2d9ad52aeae9adab22f5c0d45a8ce76e1bb1f790d6e7ec3b97555bc3 (image=quay.io/ceph/keepalived:2.2.4, name=elated_lovelace, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, release=1793, vcs-type=git)
Jan 27 08:31:05 compute-0 podman[94174]: 2026-01-27 08:31:05.893250531 +0000 UTC m=+3.571060235 container died 38d5ea8c2d9ad52aeae9adab22f5c0d45a8ce76e1bb1f790d6e7ec3b97555bc3 (image=quay.io/ceph/keepalived:2.2.4, name=elated_lovelace, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, architecture=x86_64, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.buildah.version=1.28.2, vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2023-02-22T09:23:20)
Jan 27 08:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-925b9828fd31b232c3479428d9a9c3bf6d4a093baa011ab53a9ec59e823c41a3-merged.mount: Deactivated successfully.
Jan 27 08:31:06 compute-0 podman[94174]: 2026-01-27 08:31:06.099081494 +0000 UTC m=+3.776891208 container remove 38d5ea8c2d9ad52aeae9adab22f5c0d45a8ce76e1bb1f790d6e7ec3b97555bc3 (image=quay.io/ceph/keepalived:2.2.4, name=elated_lovelace, name=keepalived, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20)
Jan 27 08:31:06 compute-0 systemd[1]: libpod-conmon-38d5ea8c2d9ad52aeae9adab22f5c0d45a8ce76e1bb1f790d6e7ec3b97555bc3.scope: Deactivated successfully.
Jan 27 08:31:06 compute-0 systemd[1]: Reloading.
Jan 27 08:31:06 compute-0 systemd-rc-local-generator[94413]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:31:06 compute-0 systemd-sysv-generator[94416]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:31:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 27 08:31:06 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377515265' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 27 08:31:06 compute-0 funny_greider[94341]: 
Jan 27 08:31:06 compute-0 funny_greider[94341]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_in
secure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502918246","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.dkphsh","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.nigpsg","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.igzbmp","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 27 08:31:06 compute-0 podman[94281]: 2026-01-27 08:31:06.264659204 +0000 UTC m=+2.394756977 container died 73737941942f5d23aeaca87075da3d5b3322a23ccd8215c355359d6597aba892 (image=quay.io/ceph/ceph:v18, name=funny_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 08:31:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:06 compute-0 systemd[1]: libpod-73737941942f5d23aeaca87075da3d5b3322a23ccd8215c355359d6597aba892.scope: Deactivated successfully.
Jan 27 08:31:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-49129b126d21d4b272b7b6796e4e1b2a344881857122086a0a520243f01ab24c-merged.mount: Deactivated successfully.
Jan 27 08:31:06 compute-0 systemd[1]: Reloading.
Jan 27 08:31:06 compute-0 systemd-rc-local-generator[94467]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:31:06 compute-0 systemd-sysv-generator[94472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:31:06 compute-0 podman[94281]: 2026-01-27 08:31:06.505966186 +0000 UTC m=+2.636063969 container remove 73737941942f5d23aeaca87075da3d5b3322a23ccd8215c355359d6597aba892 (image=quay.io/ceph/ceph:v18, name=funny_greider, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:06 compute-0 sudo[94278]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:06 compute-0 systemd[1]: libpod-conmon-73737941942f5d23aeaca87075da3d5b3322a23ccd8215c355359d6597aba892.scope: Deactivated successfully.
Jan 27 08:31:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v123: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 127 KiB/s rd, 3.5 KiB/s wr, 235 op/s
Jan 27 08:31:06 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.knqeph for 281e9bde-2795-59f4-98ac-90cf5b49a2de...
Jan 27 08:31:06 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/377515265' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 27 08:31:06 compute-0 podman[94528]: 2026-01-27 08:31:06.895840644 +0000 UTC m=+0.055011750 container create eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, release=1793, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public)
Jan 27 08:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebf47f8afbd8b258a40c714cc2d554f48bbbf90b8201b06aba091c684fcabd9/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:06 compute-0 podman[94528]: 2026-01-27 08:31:06.958296797 +0000 UTC m=+0.117467923 container init eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, architecture=x86_64, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, distribution-scope=public, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 27 08:31:06 compute-0 podman[94528]: 2026-01-27 08:31:06.864813722 +0000 UTC m=+0.023984848 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 27 08:31:06 compute-0 podman[94528]: 2026-01-27 08:31:06.963425891 +0000 UTC m=+0.122596997 container start eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, vcs-type=git, description=keepalived for Ceph, name=keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, build-date=2023-02-22T09:23:20, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 27 08:31:06 compute-0 bash[94528]: eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e
Jan 27 08:31:06 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.knqeph for 281e9bde-2795-59f4-98ac-90cf5b49a2de.
Jan 27 08:31:06 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph[94544]: Tue Jan 27 08:31:06 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 27 08:31:06 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph[94544]: Tue Jan 27 08:31:06 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 27 08:31:06 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph[94544]: Tue Jan 27 08:31:06 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 27 08:31:06 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph[94544]: Tue Jan 27 08:31:06 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 27 08:31:06 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph[94544]: Tue Jan 27 08:31:06 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 27 08:31:06 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph[94544]: Tue Jan 27 08:31:06 2026: Starting VRRP child process, pid=4
Jan 27 08:31:06 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph[94544]: Tue Jan 27 08:31:06 2026: Startup complete
Jan 27 08:31:07 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph[94544]: Tue Jan 27 08:31:06 2026: (VI_0) Entering BACKUP STATE (init)
Jan 27 08:31:07 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph[94544]: Tue Jan 27 08:31:07 2026: VRRP_Script(check_backend) succeeded
Jan 27 08:31:07 compute-0 sudo[94087]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:31:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:31:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 27 08:31:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:07 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 27 08:31:07 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 27 08:31:07 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 27 08:31:07 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 27 08:31:07 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.hyzhbc on compute-2
Jan 27 08:31:07 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.hyzhbc on compute-2
Jan 27 08:31:07 compute-0 sudo[94575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjkgxpptmozexyvmznjokoyipidctcie ; /usr/bin/python3'
Jan 27 08:31:07 compute-0 sudo[94575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:31:07 compute-0 python3[94577]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:31:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:07.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:07 compute-0 podman[94578]: 2026-01-27 08:31:07.579923206 +0000 UTC m=+0.040420028 container create bbc0b1b14d078e3c9abf5431a1c8c2e0150a79c76023c0da9cd74643ec56f4da (image=quay.io/ceph/ceph:v18, name=interesting_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:07 compute-0 systemd[1]: Started libpod-conmon-bbc0b1b14d078e3c9abf5431a1c8c2e0150a79c76023c0da9cd74643ec56f4da.scope.
Jan 27 08:31:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c33383e7cc1cf6a4ebdb199898340fe996d49e4a00c43a98b84ddd5eac8f9be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c33383e7cc1cf6a4ebdb199898340fe996d49e4a00c43a98b84ddd5eac8f9be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:07 compute-0 podman[94578]: 2026-01-27 08:31:07.562826359 +0000 UTC m=+0.023323201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:31:07 compute-0 podman[94578]: 2026-01-27 08:31:07.666312456 +0000 UTC m=+0.126809298 container init bbc0b1b14d078e3c9abf5431a1c8c2e0150a79c76023c0da9cd74643ec56f4da (image=quay.io/ceph/ceph:v18, name=interesting_morse, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:31:07 compute-0 podman[94578]: 2026-01-27 08:31:07.672551179 +0000 UTC m=+0.133048001 container start bbc0b1b14d078e3c9abf5431a1c8c2e0150a79c76023c0da9cd74643ec56f4da (image=quay.io/ceph/ceph:v18, name=interesting_morse, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 08:31:07 compute-0 podman[94578]: 2026-01-27 08:31:07.675565958 +0000 UTC m=+0.136062800 container attach bbc0b1b14d078e3c9abf5431a1c8c2e0150a79c76023c0da9cd74643ec56f4da (image=quay.io/ceph/ceph:v18, name=interesting_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 27 08:31:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:07.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:07 compute-0 ceph-mon[74357]: pgmap v123: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 127 KiB/s rd, 3.5 KiB/s wr, 235 op/s
Jan 27 08:31:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:07 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 27 08:31:07 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 27 08:31:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Jan 27 08:31:08 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256046237' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 27 08:31:08 compute-0 interesting_morse[94593]: mimic
Jan 27 08:31:08 compute-0 systemd[1]: libpod-bbc0b1b14d078e3c9abf5431a1c8c2e0150a79c76023c0da9cd74643ec56f4da.scope: Deactivated successfully.
Jan 27 08:31:08 compute-0 podman[94578]: 2026-01-27 08:31:08.218306004 +0000 UTC m=+0.678802816 container died bbc0b1b14d078e3c9abf5431a1c8c2e0150a79c76023c0da9cd74643ec56f4da (image=quay.io/ceph/ceph:v18, name=interesting_morse, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c33383e7cc1cf6a4ebdb199898340fe996d49e4a00c43a98b84ddd5eac8f9be-merged.mount: Deactivated successfully.
Jan 27 08:31:08 compute-0 podman[94578]: 2026-01-27 08:31:08.337571572 +0000 UTC m=+0.798068394 container remove bbc0b1b14d078e3c9abf5431a1c8c2e0150a79c76023c0da9cd74643ec56f4da (image=quay.io/ceph/ceph:v18, name=interesting_morse, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:08 compute-0 systemd[1]: libpod-conmon-bbc0b1b14d078e3c9abf5431a1c8c2e0150a79c76023c0da9cd74643ec56f4da.scope: Deactivated successfully.
Jan 27 08:31:08 compute-0 sudo[94575]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v124: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 115 KiB/s rd, 3.2 KiB/s wr, 213 op/s
Jan 27 08:31:08 compute-0 ceph-mon[74357]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 27 08:31:08 compute-0 ceph-mon[74357]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 27 08:31:08 compute-0 ceph-mon[74357]: Deploying daemon keepalived.rgw.default.compute-2.hyzhbc on compute-2
Jan 27 08:31:08 compute-0 ceph-mon[74357]: 3.1f scrub starts
Jan 27 08:31:08 compute-0 ceph-mon[74357]: 3.1f scrub ok
Jan 27 08:31:08 compute-0 ceph-mon[74357]: 3.10 scrub starts
Jan 27 08:31:08 compute-0 ceph-mon[74357]: 3.10 scrub ok
Jan 27 08:31:08 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1256046237' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 27 08:31:09 compute-0 sudo[94653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzdkglrdryqzwmdpvbowtqyhfnrocqlf ; /usr/bin/python3'
Jan 27 08:31:09 compute-0 sudo[94653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:31:09 compute-0 python3[94655]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:31:09 compute-0 podman[94656]: 2026-01-27 08:31:09.412953919 +0000 UTC m=+0.050255195 container create 71e671f053b29369392a2ca26e39ccd928cab3161d1ff40856e7998c6dff319b (image=quay.io/ceph/ceph:v18, name=reverent_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:31:09 compute-0 systemd[1]: Started libpod-conmon-71e671f053b29369392a2ca26e39ccd928cab3161d1ff40856e7998c6dff319b.scope.
Jan 27 08:31:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5352de9156aa43f2369b2a7fbb98d3c863cff3662d653f676b01f46f5f48a03f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5352de9156aa43f2369b2a7fbb98d3c863cff3662d653f676b01f46f5f48a03f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:09 compute-0 podman[94656]: 2026-01-27 08:31:09.391312023 +0000 UTC m=+0.028613349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:31:09 compute-0 podman[94656]: 2026-01-27 08:31:09.494253456 +0000 UTC m=+0.131554752 container init 71e671f053b29369392a2ca26e39ccd928cab3161d1ff40856e7998c6dff319b (image=quay.io/ceph/ceph:v18, name=reverent_greider, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:31:09 compute-0 podman[94656]: 2026-01-27 08:31:09.499539004 +0000 UTC m=+0.136840280 container start 71e671f053b29369392a2ca26e39ccd928cab3161d1ff40856e7998c6dff319b (image=quay.io/ceph/ceph:v18, name=reverent_greider, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:31:09 compute-0 podman[94656]: 2026-01-27 08:31:09.502505302 +0000 UTC m=+0.139806608 container attach 71e671f053b29369392a2ca26e39ccd928cab3161d1ff40856e7998c6dff319b (image=quay.io/ceph/ceph:v18, name=reverent_greider, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:09.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:09.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:09 compute-0 ceph-mon[74357]: 3.e scrub starts
Jan 27 08:31:09 compute-0 ceph-mon[74357]: 3.e scrub ok
Jan 27 08:31:09 compute-0 ceph-mon[74357]: 2.19 scrub starts
Jan 27 08:31:09 compute-0 ceph-mon[74357]: 2.19 scrub ok
Jan 27 08:31:09 compute-0 ceph-mon[74357]: pgmap v124: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 115 KiB/s rd, 3.2 KiB/s wr, 213 op/s
Jan 27 08:31:09 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 27 08:31:09 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 27 08:31:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Jan 27 08:31:10 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2014210263' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 27 08:31:10 compute-0 reverent_greider[94672]: 
Jan 27 08:31:10 compute-0 reverent_greider[94672]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":12}}
Jan 27 08:31:10 compute-0 systemd[1]: libpod-71e671f053b29369392a2ca26e39ccd928cab3161d1ff40856e7998c6dff319b.scope: Deactivated successfully.
Jan 27 08:31:10 compute-0 podman[94656]: 2026-01-27 08:31:10.09275263 +0000 UTC m=+0.730053916 container died 71e671f053b29369392a2ca26e39ccd928cab3161d1ff40856e7998c6dff319b (image=quay.io/ceph/ceph:v18, name=reverent_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5352de9156aa43f2369b2a7fbb98d3c863cff3662d653f676b01f46f5f48a03f-merged.mount: Deactivated successfully.
Jan 27 08:31:10 compute-0 podman[94656]: 2026-01-27 08:31:10.142723278 +0000 UTC m=+0.780024574 container remove 71e671f053b29369392a2ca26e39ccd928cab3161d1ff40856e7998c6dff319b (image=quay.io/ceph/ceph:v18, name=reverent_greider, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:31:10 compute-0 systemd[1]: libpod-conmon-71e671f053b29369392a2ca26e39ccd928cab3161d1ff40856e7998c6dff319b.scope: Deactivated successfully.
Jan 27 08:31:10 compute-0 sudo[94653]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:10 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph[94544]: Tue Jan 27 08:31:10 2026: (VI_0) Entering MASTER STATE
Jan 27 08:31:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v125: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 2.8 KiB/s wr, 190 op/s
Jan 27 08:31:10 compute-0 ceph-mon[74357]: 3.13 scrub starts
Jan 27 08:31:10 compute-0 ceph-mon[74357]: 3.13 scrub ok
Jan 27 08:31:10 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2014210263' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 27 08:31:10 compute-0 ceph-mon[74357]: pgmap v125: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 2.8 KiB/s wr, 190 op/s
Jan 27 08:31:10 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 27 08:31:10 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 27 08:31:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:31:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:11 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:31:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:11.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:11 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 27 08:31:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:11.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:12 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:12 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev a8add5ba-243e-4d7a-a79c-8a847ef99565 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 27 08:31:12 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event a8add5ba-243e-4d7a-a79c-8a847ef99565 (Updating ingress.rgw.default deployment (+4 -> 4)) in 19 seconds
Jan 27 08:31:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 27 08:31:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v126: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 27 08:31:12 compute-0 ceph-mon[74357]: 2.b scrub starts
Jan 27 08:31:12 compute-0 ceph-mon[74357]: 2.b scrub ok
Jan 27 08:31:12 compute-0 ceph-mon[74357]: 3.14 scrub starts
Jan 27 08:31:12 compute-0 ceph-mon[74357]: 3.14 scrub ok
Jan 27 08:31:12 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:12 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:12 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:12 compute-0 sudo[94712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:12 compute-0 sudo[94712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:12 compute-0 sudo[94713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:12 compute-0 sudo[94712]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:12 compute-0 sudo[94713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:12 compute-0 sudo[94713]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:12 compute-0 sudo[94762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:31:12 compute-0 sudo[94762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:12 compute-0 sudo[94765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:12 compute-0 sudo[94762]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:12 compute-0 sudo[94765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:12 compute-0 sudo[94765]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:12 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 27 08:31:12 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 27 08:31:13 compute-0 sudo[94813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:13 compute-0 sudo[94813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:13 compute-0 sudo[94813]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:13 compute-0 sudo[94838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:13 compute-0 sudo[94838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:13 compute-0 sudo[94838]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:13 compute-0 sudo[94863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:13 compute-0 sudo[94863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:13 compute-0 sudo[94863]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:13 compute-0 sudo[94888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 27 08:31:13 compute-0 sudo[94888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:13.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:31:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:13.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:31:13 compute-0 ceph-mon[74357]: 3.15 scrub starts
Jan 27 08:31:13 compute-0 ceph-mon[74357]: 3.15 scrub ok
Jan 27 08:31:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:13 compute-0 ceph-mon[74357]: pgmap v126: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 27 08:31:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:13 compute-0 ceph-mon[74357]: 3.c scrub starts
Jan 27 08:31:13 compute-0 ceph-mon[74357]: 3.c scrub ok
Jan 27 08:31:13 compute-0 podman[94985]: 2026-01-27 08:31:13.844289114 +0000 UTC m=+0.054782914 container exec b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:31:13 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Jan 27 08:31:13 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Jan 27 08:31:13 compute-0 podman[94985]: 2026-01-27 08:31:13.941299322 +0000 UTC m=+0.151793092 container exec_died b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:31:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:31:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:31:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:14 compute-0 podman[95122]: 2026-01-27 08:31:14.539810316 +0000 UTC m=+0.052119534 container exec 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:31:14 compute-0 ceph-mgr[74650]: [progress INFO root] Writing back 11 completed events
Jan 27 08:31:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 27 08:31:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:14 compute-0 podman[95144]: 2026-01-27 08:31:14.647130353 +0000 UTC m=+0.088920037 container exec_died 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:31:14 compute-0 podman[95122]: 2026-01-27 08:31:14.667636409 +0000 UTC m=+0.179945647 container exec_died 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:31:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v127: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 27 08:31:14 compute-0 ceph-mon[74357]: 2.18 scrub starts
Jan 27 08:31:14 compute-0 ceph-mon[74357]: 2.18 scrub ok
Jan 27 08:31:14 compute-0 ceph-mon[74357]: 2.e scrub starts
Jan 27 08:31:14 compute-0 ceph-mon[74357]: 2.e scrub ok
Jan 27 08:31:14 compute-0 ceph-mon[74357]: 3.5 deep-scrub starts
Jan 27 08:31:14 compute-0 ceph-mon[74357]: 3.5 deep-scrub ok
Jan 27 08:31:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:14 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph[94544]: Tue Jan 27 08:31:14 2026: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Jan 27 08:31:14 compute-0 podman[95186]: 2026-01-27 08:31:14.916442967 +0000 UTC m=+0.099428722 container exec eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, release=1793, distribution-scope=public, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.component=keepalived-container)
Jan 27 08:31:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:31:14
Jan 27 08:31:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:31:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:31:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'images', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.log']
Jan 27 08:31:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:31:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:31:14 compute-0 podman[95207]: 2026-01-27 08:31:14.983074239 +0000 UTC m=+0.049724191 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-type=git, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.openshift.expose-services=, release=1793, vendor=Red Hat, Inc.)
Jan 27 08:31:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:31:14 compute-0 podman[95186]: 2026-01-27 08:31:14.999493209 +0000 UTC m=+0.182478924 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, release=1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2)
Jan 27 08:31:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:31:15 compute-0 sudo[94888]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:31:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:31:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:15 compute-0 sudo[95221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:15 compute-0 sudo[95221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:15 compute-0 sudo[95221]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:15 compute-0 sudo[95246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:15 compute-0 sudo[95246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:15 compute-0 sudo[95246]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:15 compute-0 sudo[95271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:15 compute-0 sudo[95271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:15 compute-0 sudo[95271]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:15 compute-0 sudo[95296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:31:15 compute-0 sudo[95296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:15.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:31:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:15.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:31:15 compute-0 ceph-mon[74357]: pgmap v127: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 27 08:31:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:15 compute-0 sudo[95296]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:31:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:31:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:31:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 62d7b62d-460e-4d92-af83-5b563258f7be does not exist
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev a8dfead9-218c-4a70-8da6-8c3fc17cb1ab does not exist
Jan 27 08:31:15 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev fe670f32-a507-4cdf-8a49-714828fe837b does not exist
Jan 27 08:31:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:31:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:31:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:31:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:31:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:15 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 27 08:31:15 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 27 08:31:15 compute-0 sudo[95352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:15 compute-0 sudo[95352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:15 compute-0 sudo[95352]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:15 compute-0 sudo[95377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:15 compute-0 sudo[95377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:15 compute-0 sudo[95377]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:16 compute-0 sudo[95402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:16 compute-0 sudo[95402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:16 compute-0 sudo[95402]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:16 compute-0 sudo[95427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:31:16 compute-0 sudo[95427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:16 compute-0 podman[95491]: 2026-01-27 08:31:16.430897478 +0000 UTC m=+0.048428247 container create d69a0e2b1c3d9b674a778bde1446dc241346354ba804aad33a6c1b767df70705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:31:16 compute-0 systemd[1]: Started libpod-conmon-d69a0e2b1c3d9b674a778bde1446dc241346354ba804aad33a6c1b767df70705.scope.
Jan 27 08:31:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:16 compute-0 podman[95491]: 2026-01-27 08:31:16.407464295 +0000 UTC m=+0.024995144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:16 compute-0 podman[95491]: 2026-01-27 08:31:16.505054778 +0000 UTC m=+0.122585577 container init d69a0e2b1c3d9b674a778bde1446dc241346354ba804aad33a6c1b767df70705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bouman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:16 compute-0 podman[95491]: 2026-01-27 08:31:16.512977376 +0000 UTC m=+0.130508165 container start d69a0e2b1c3d9b674a778bde1446dc241346354ba804aad33a6c1b767df70705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:31:16 compute-0 podman[95491]: 2026-01-27 08:31:16.516472887 +0000 UTC m=+0.134003666 container attach d69a0e2b1c3d9b674a778bde1446dc241346354ba804aad33a6c1b767df70705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bouman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:31:16 compute-0 goofy_bouman[95507]: 167 167
Jan 27 08:31:16 compute-0 systemd[1]: libpod-d69a0e2b1c3d9b674a778bde1446dc241346354ba804aad33a6c1b767df70705.scope: Deactivated successfully.
Jan 27 08:31:16 compute-0 podman[95491]: 2026-01-27 08:31:16.519065885 +0000 UTC m=+0.136596664 container died d69a0e2b1c3d9b674a778bde1446dc241346354ba804aad33a6c1b767df70705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bouman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3adf9b75c38999f80603512b64137bd2088a9a6ac9a4e0bce83b2dc18fdeeb69-merged.mount: Deactivated successfully.
Jan 27 08:31:16 compute-0 podman[95491]: 2026-01-27 08:31:16.559331397 +0000 UTC m=+0.176862166 container remove d69a0e2b1c3d9b674a778bde1446dc241346354ba804aad33a6c1b767df70705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bouman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:31:16 compute-0 systemd[1]: libpod-conmon-d69a0e2b1c3d9b674a778bde1446dc241346354ba804aad33a6c1b767df70705.scope: Deactivated successfully.
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 1)
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:31:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 27 08:31:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v128: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 27 08:31:16 compute-0 podman[95532]: 2026-01-27 08:31:16.723580294 +0000 UTC m=+0.045116691 container create 6a9d52e57c85ceda0c5dd77a7a5ba689be04401d9d9ef2a15e8c27506f8af1ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wescoff, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:31:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:31:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:31:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:31:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:16 compute-0 ceph-mon[74357]: 3.16 scrub starts
Jan 27 08:31:16 compute-0 ceph-mon[74357]: 3.16 scrub ok
Jan 27 08:31:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:16 compute-0 systemd[1]: Started libpod-conmon-6a9d52e57c85ceda0c5dd77a7a5ba689be04401d9d9ef2a15e8c27506f8af1ce.scope.
Jan 27 08:31:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4156486a425254bbd8e8403baf386ed4614bf15189c1f6b9a44c5230c8d25d3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4156486a425254bbd8e8403baf386ed4614bf15189c1f6b9a44c5230c8d25d3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4156486a425254bbd8e8403baf386ed4614bf15189c1f6b9a44c5230c8d25d3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4156486a425254bbd8e8403baf386ed4614bf15189c1f6b9a44c5230c8d25d3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4156486a425254bbd8e8403baf386ed4614bf15189c1f6b9a44c5230c8d25d3d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:16 compute-0 podman[95532]: 2026-01-27 08:31:16.706514857 +0000 UTC m=+0.028051274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:16 compute-0 podman[95532]: 2026-01-27 08:31:16.817160561 +0000 UTC m=+0.138696998 container init 6a9d52e57c85ceda0c5dd77a7a5ba689be04401d9d9ef2a15e8c27506f8af1ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:31:16 compute-0 podman[95532]: 2026-01-27 08:31:16.831148757 +0000 UTC m=+0.152685164 container start 6a9d52e57c85ceda0c5dd77a7a5ba689be04401d9d9ef2a15e8c27506f8af1ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wescoff, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:16 compute-0 podman[95532]: 2026-01-27 08:31:16.834489994 +0000 UTC m=+0.156026561 container attach 6a9d52e57c85ceda0c5dd77a7a5ba689be04401d9d9ef2a15e8c27506f8af1ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 27 08:31:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 27 08:31:16 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 27 08:31:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Jan 27 08:31:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:16 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev b994f03c-bd3f-47a2-96e8-075aca709896 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 27 08:31:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:17.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:17 compute-0 magical_wescoff[95548]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:31:17 compute-0 magical_wescoff[95548]: --> relative data size: 1.0
Jan 27 08:31:17 compute-0 magical_wescoff[95548]: --> All data devices are unavailable
Jan 27 08:31:17 compute-0 systemd[1]: libpod-6a9d52e57c85ceda0c5dd77a7a5ba689be04401d9d9ef2a15e8c27506f8af1ce.scope: Deactivated successfully.
Jan 27 08:31:17 compute-0 podman[95532]: 2026-01-27 08:31:17.629523589 +0000 UTC m=+0.951059986 container died 6a9d52e57c85ceda0c5dd77a7a5ba689be04401d9d9ef2a15e8c27506f8af1ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wescoff, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:31:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4156486a425254bbd8e8403baf386ed4614bf15189c1f6b9a44c5230c8d25d3d-merged.mount: Deactivated successfully.
Jan 27 08:31:17 compute-0 podman[95532]: 2026-01-27 08:31:17.6826942 +0000 UTC m=+1.004230597 container remove 6a9d52e57c85ceda0c5dd77a7a5ba689be04401d9d9ef2a15e8c27506f8af1ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:17 compute-0 systemd[1]: libpod-conmon-6a9d52e57c85ceda0c5dd77a7a5ba689be04401d9d9ef2a15e8c27506f8af1ce.scope: Deactivated successfully.
Jan 27 08:31:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:17.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:17 compute-0 sudo[95427]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:17 compute-0 ceph-mon[74357]: pgmap v128: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 27 08:31:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:17 compute-0 ceph-mon[74357]: osdmap e46: 3 total, 3 up, 3 in
Jan 27 08:31:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:17 compute-0 sudo[95578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:17 compute-0 sudo[95578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:17 compute-0 sudo[95578]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:17 compute-0 sudo[95603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:17 compute-0 sudo[95603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:17 compute-0 sudo[95603]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 27 08:31:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 27 08:31:17 compute-0 sudo[95628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:17 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 27 08:31:17 compute-0 sudo[95628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:17 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev 7be78d11-05c4-4980-b1dc-dd60dba85379 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 27 08:31:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Jan 27 08:31:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 27 08:31:17 compute-0 sudo[95628]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:17 compute-0 sudo[95653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:31:17 compute-0 sudo[95653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:18 compute-0 podman[95718]: 2026-01-27 08:31:18.27530298 +0000 UTC m=+0.035078909 container create 27c4fc8e7570813ffffd49729bd114289bf57b02e613f7c7a886a83d7cfa100d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:31:18 compute-0 systemd[1]: Started libpod-conmon-27c4fc8e7570813ffffd49729bd114289bf57b02e613f7c7a886a83d7cfa100d.scope.
Jan 27 08:31:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:18 compute-0 podman[95718]: 2026-01-27 08:31:18.340493335 +0000 UTC m=+0.100269264 container init 27c4fc8e7570813ffffd49729bd114289bf57b02e613f7c7a886a83d7cfa100d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:18 compute-0 podman[95718]: 2026-01-27 08:31:18.347344964 +0000 UTC m=+0.107120933 container start 27c4fc8e7570813ffffd49729bd114289bf57b02e613f7c7a886a83d7cfa100d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:31:18 compute-0 elastic_aryabhata[95735]: 167 167
Jan 27 08:31:18 compute-0 systemd[1]: libpod-27c4fc8e7570813ffffd49729bd114289bf57b02e613f7c7a886a83d7cfa100d.scope: Deactivated successfully.
Jan 27 08:31:18 compute-0 podman[95718]: 2026-01-27 08:31:18.351192635 +0000 UTC m=+0.110968564 container attach 27c4fc8e7570813ffffd49729bd114289bf57b02e613f7c7a886a83d7cfa100d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Jan 27 08:31:18 compute-0 podman[95718]: 2026-01-27 08:31:18.351490853 +0000 UTC m=+0.111266782 container died 27c4fc8e7570813ffffd49729bd114289bf57b02e613f7c7a886a83d7cfa100d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:31:18 compute-0 podman[95718]: 2026-01-27 08:31:18.260829172 +0000 UTC m=+0.020605101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-42f93b15740b98a24bee653c7c760d0b7c48011dd7ae9ec669da4b00bfb4c4ba-merged.mount: Deactivated successfully.
Jan 27 08:31:18 compute-0 podman[95718]: 2026-01-27 08:31:18.381914939 +0000 UTC m=+0.141690908 container remove 27c4fc8e7570813ffffd49729bd114289bf57b02e613f7c7a886a83d7cfa100d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 08:31:18 compute-0 systemd[1]: libpod-conmon-27c4fc8e7570813ffffd49729bd114289bf57b02e613f7c7a886a83d7cfa100d.scope: Deactivated successfully.
Jan 27 08:31:18 compute-0 podman[95759]: 2026-01-27 08:31:18.534472438 +0000 UTC m=+0.051211200 container create bbd325db9a732470028e3e84a11e29a1e19893cd7296512dc9ea0f1b928a1d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kirch, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 27 08:31:18 compute-0 systemd[1]: Started libpod-conmon-bbd325db9a732470028e3e84a11e29a1e19893cd7296512dc9ea0f1b928a1d30.scope.
Jan 27 08:31:18 compute-0 podman[95759]: 2026-01-27 08:31:18.508000886 +0000 UTC m=+0.024739678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b331add4a97cf8bd48834a0fcaceca0912095e878e12286b6fbe5efb91e09c9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b331add4a97cf8bd48834a0fcaceca0912095e878e12286b6fbe5efb91e09c9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b331add4a97cf8bd48834a0fcaceca0912095e878e12286b6fbe5efb91e09c9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b331add4a97cf8bd48834a0fcaceca0912095e878e12286b6fbe5efb91e09c9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:18 compute-0 podman[95759]: 2026-01-27 08:31:18.619563215 +0000 UTC m=+0.136301987 container init bbd325db9a732470028e3e84a11e29a1e19893cd7296512dc9ea0f1b928a1d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kirch, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:18 compute-0 podman[95759]: 2026-01-27 08:31:18.627193303 +0000 UTC m=+0.143932025 container start bbd325db9a732470028e3e84a11e29a1e19893cd7296512dc9ea0f1b928a1d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kirch, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:18 compute-0 podman[95759]: 2026-01-27 08:31:18.630662725 +0000 UTC m=+0.147401497 container attach bbd325db9a732470028e3e84a11e29a1e19893cd7296512dc9ea0f1b928a1d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kirch, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 08:31:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v131: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:31:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 27 08:31:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 27 08:31:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 27 08:31:18 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 27 08:31:18 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev 4085f361-f477-4ae0-a879-4c8bd64c5c98 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 27 08:31:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Jan 27 08:31:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:18 compute-0 ceph-mon[74357]: osdmap e47: 3 total, 3 up, 3 in
Jan 27 08:31:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 27 08:31:18 compute-0 ceph-mon[74357]: 3.1d scrub starts
Jan 27 08:31:18 compute-0 ceph-mon[74357]: 3.1d scrub ok
Jan 27 08:31:18 compute-0 ceph-mon[74357]: pgmap v131: 73 pgs: 73 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:31:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:19 compute-0 nervous_kirch[95775]: {
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:     "0": [
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:         {
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             "devices": [
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "/dev/loop3"
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             ],
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             "lv_name": "ceph_lv0",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             "lv_size": "7511998464",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             "name": "ceph_lv0",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             "tags": {
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "ceph.cluster_name": "ceph",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "ceph.crush_device_class": "",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "ceph.encrypted": "0",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "ceph.osd_id": "0",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "ceph.type": "block",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:                 "ceph.vdo": "0"
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             },
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             "type": "block",
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:             "vg_name": "ceph_vg0"
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:         }
Jan 27 08:31:19 compute-0 nervous_kirch[95775]:     ]
Jan 27 08:31:19 compute-0 nervous_kirch[95775]: }
Jan 27 08:31:19 compute-0 systemd[1]: libpod-bbd325db9a732470028e3e84a11e29a1e19893cd7296512dc9ea0f1b928a1d30.scope: Deactivated successfully.
Jan 27 08:31:19 compute-0 podman[95759]: 2026-01-27 08:31:19.400379856 +0000 UTC m=+0.917118588 container died bbd325db9a732470028e3e84a11e29a1e19893cd7296512dc9ea0f1b928a1d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b331add4a97cf8bd48834a0fcaceca0912095e878e12286b6fbe5efb91e09c9f-merged.mount: Deactivated successfully.
Jan 27 08:31:19 compute-0 podman[95759]: 2026-01-27 08:31:19.453796454 +0000 UTC m=+0.970535176 container remove bbd325db9a732470028e3e84a11e29a1e19893cd7296512dc9ea0f1b928a1d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 08:31:19 compute-0 systemd[1]: libpod-conmon-bbd325db9a732470028e3e84a11e29a1e19893cd7296512dc9ea0f1b928a1d30.scope: Deactivated successfully.
Jan 27 08:31:19 compute-0 sudo[95653]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:19 compute-0 sudo[95797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:19 compute-0 sudo[95797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:19 compute-0 sudo[95797]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:19.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:19 compute-0 ceph-mgr[74650]: [progress WARNING root] Starting Global Recovery Event, 62 pgs not in active + clean state
Jan 27 08:31:19 compute-0 sudo[95822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:19 compute-0 sudo[95822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:19 compute-0 sudo[95822]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:19 compute-0 sudo[95847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:19 compute-0 sudo[95847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:19 compute-0 sudo[95847]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:19.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:19 compute-0 sudo[95872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:31:19 compute-0 sudo[95872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 27 08:31:19 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 27 08:31:19 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 27 08:31:19 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev 47c6e1b4-1601-4605-bfe1-3e8c8dc15222 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 27 08:31:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Jan 27 08:31:19 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:19 compute-0 ceph-mon[74357]: 2.6 scrub starts
Jan 27 08:31:19 compute-0 ceph-mon[74357]: 2.6 scrub ok
Jan 27 08:31:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 27 08:31:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:19 compute-0 ceph-mon[74357]: osdmap e48: 3 total, 3 up, 3 in
Jan 27 08:31:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:20 compute-0 podman[95937]: 2026-01-27 08:31:20.110793478 +0000 UTC m=+0.049412774 container create 3ec850444a42944212e858e88aa2e4ced2097ffee76c76d9d2cfa03a8c7300b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:20 compute-0 systemd[1]: Started libpod-conmon-3ec850444a42944212e858e88aa2e4ced2097ffee76c76d9d2cfa03a8c7300b1.scope.
Jan 27 08:31:20 compute-0 podman[95937]: 2026-01-27 08:31:20.083484914 +0000 UTC m=+0.022104290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:20 compute-0 podman[95937]: 2026-01-27 08:31:20.194311703 +0000 UTC m=+0.132931049 container init 3ec850444a42944212e858e88aa2e4ced2097ffee76c76d9d2cfa03a8c7300b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:31:20 compute-0 podman[95937]: 2026-01-27 08:31:20.201241404 +0000 UTC m=+0.139860710 container start 3ec850444a42944212e858e88aa2e4ced2097ffee76c76d9d2cfa03a8c7300b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ganguly, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:31:20 compute-0 podman[95937]: 2026-01-27 08:31:20.205222508 +0000 UTC m=+0.143841814 container attach 3ec850444a42944212e858e88aa2e4ced2097ffee76c76d9d2cfa03a8c7300b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:31:20 compute-0 jolly_ganguly[95954]: 167 167
Jan 27 08:31:20 compute-0 systemd[1]: libpod-3ec850444a42944212e858e88aa2e4ced2097ffee76c76d9d2cfa03a8c7300b1.scope: Deactivated successfully.
Jan 27 08:31:20 compute-0 podman[95959]: 2026-01-27 08:31:20.249739872 +0000 UTC m=+0.028792144 container died 3ec850444a42944212e858e88aa2e4ced2097ffee76c76d9d2cfa03a8c7300b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:31:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-80c431f0d5f0a1979477b4dfa64d8a6682be75df1a95d4b327261bbb23118d0f-merged.mount: Deactivated successfully.
Jan 27 08:31:20 compute-0 podman[95959]: 2026-01-27 08:31:20.284162593 +0000 UTC m=+0.063214855 container remove 3ec850444a42944212e858e88aa2e4ced2097ffee76c76d9d2cfa03a8c7300b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ganguly, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:31:20 compute-0 systemd[1]: libpod-conmon-3ec850444a42944212e858e88aa2e4ced2097ffee76c76d9d2cfa03a8c7300b1.scope: Deactivated successfully.
Jan 27 08:31:20 compute-0 podman[95981]: 2026-01-27 08:31:20.438996032 +0000 UTC m=+0.040074549 container create 853c2acd87017c2f92a8e25deb6042c11f4ed7308a0b64bd55c79ca2c8897bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 08:31:20 compute-0 systemd[1]: Started libpod-conmon-853c2acd87017c2f92a8e25deb6042c11f4ed7308a0b64bd55c79ca2c8897bb5.scope.
Jan 27 08:31:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5831ce7a9053901495d0fe68800235e1f83ca0163be3504c86c66adfc9dc3453/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5831ce7a9053901495d0fe68800235e1f83ca0163be3504c86c66adfc9dc3453/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5831ce7a9053901495d0fe68800235e1f83ca0163be3504c86c66adfc9dc3453/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5831ce7a9053901495d0fe68800235e1f83ca0163be3504c86c66adfc9dc3453/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:20 compute-0 podman[95981]: 2026-01-27 08:31:20.419219495 +0000 UTC m=+0.020298042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:20 compute-0 podman[95981]: 2026-01-27 08:31:20.523594145 +0000 UTC m=+0.124672692 container init 853c2acd87017c2f92a8e25deb6042c11f4ed7308a0b64bd55c79ca2c8897bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:20 compute-0 podman[95981]: 2026-01-27 08:31:20.538058313 +0000 UTC m=+0.139136830 container start 853c2acd87017c2f92a8e25deb6042c11f4ed7308a0b64bd55c79ca2c8897bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:20 compute-0 podman[95981]: 2026-01-27 08:31:20.541855133 +0000 UTC m=+0.142933640 container attach 853c2acd87017c2f92a8e25deb6042c11f4ed7308a0b64bd55c79ca2c8897bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v134: 135 pgs: 1 peering, 31 unknown, 103 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:31:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Jan 27 08:31:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 27 08:31:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 27 08:31:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 27 08:31:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 27 08:31:20 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 27 08:31:20 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev 6dfdf0bb-e915-45f6-8efe-6761fbf7b706 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 27 08:31:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Jan 27 08:31:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:20 compute-0 ceph-mon[74357]: 2.9 deep-scrub starts
Jan 27 08:31:20 compute-0 ceph-mon[74357]: 2.9 deep-scrub ok
Jan 27 08:31:20 compute-0 ceph-mon[74357]: 2.f deep-scrub starts
Jan 27 08:31:20 compute-0 ceph-mon[74357]: 2.f deep-scrub ok
Jan 27 08:31:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:20 compute-0 ceph-mon[74357]: osdmap e49: 3 total, 3 up, 3 in
Jan 27 08:31:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:20 compute-0 ceph-mon[74357]: pgmap v134: 135 pgs: 1 peering, 31 unknown, 103 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:31:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 27 08:31:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 27 08:31:20 compute-0 ceph-mon[74357]: osdmap e50: 3 total, 3 up, 3 in
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 50 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=50 pruub=14.399680138s) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active pruub 124.968711853s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 50 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=50 pruub=14.399680138s) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown pruub 124.968711853s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:21 compute-0 objective_lumiere[95997]: {
Jan 27 08:31:21 compute-0 objective_lumiere[95997]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:31:21 compute-0 objective_lumiere[95997]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:31:21 compute-0 objective_lumiere[95997]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:31:21 compute-0 objective_lumiere[95997]:         "osd_id": 0,
Jan 27 08:31:21 compute-0 objective_lumiere[95997]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:31:21 compute-0 objective_lumiere[95997]:         "type": "bluestore"
Jan 27 08:31:21 compute-0 objective_lumiere[95997]:     }
Jan 27 08:31:21 compute-0 objective_lumiere[95997]: }
Jan 27 08:31:21 compute-0 systemd[1]: libpod-853c2acd87017c2f92a8e25deb6042c11f4ed7308a0b64bd55c79ca2c8897bb5.scope: Deactivated successfully.
Jan 27 08:31:21 compute-0 podman[95981]: 2026-01-27 08:31:21.452597014 +0000 UTC m=+1.053675581 container died 853c2acd87017c2f92a8e25deb6042c11f4ed7308a0b64bd55c79ca2c8897bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5831ce7a9053901495d0fe68800235e1f83ca0163be3504c86c66adfc9dc3453-merged.mount: Deactivated successfully.
Jan 27 08:31:21 compute-0 podman[95981]: 2026-01-27 08:31:21.502160131 +0000 UTC m=+1.103238628 container remove 853c2acd87017c2f92a8e25deb6042c11f4ed7308a0b64bd55c79ca2c8897bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:31:21 compute-0 systemd[1]: libpod-conmon-853c2acd87017c2f92a8e25deb6042c11f4ed7308a0b64bd55c79ca2c8897bb5.scope: Deactivated successfully.
Jan 27 08:31:21 compute-0 sudo[95872]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:31:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:31:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:21 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f29b72b6-d13c-41d6-b0eb-c417f648e6b1 does not exist
Jan 27 08:31:21 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev 355fda95-627b-4182-9188-850566c472a4 (Updating mds.cephfs deployment (+3 -> 3))
Jan 27 08:31:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.jocsot", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 27 08:31:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.jocsot", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 27 08:31:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.jocsot", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 27 08:31:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:21 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:21 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.jocsot on compute-2
Jan 27 08:31:21 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.jocsot on compute-2
Jan 27 08:31:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:21.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:21.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 27 08:31:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 27 08:31:21 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 27 08:31:21 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev 719478cb-5730-432e-9670-d619b7ee0071 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 27 08:31:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Jan 27 08:31:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1d( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1a( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.19( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1e( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1c( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.16( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.c( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.15( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.a( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.e( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.d( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.2( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.7( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.jocsot", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 27 08:31:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.jocsot", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.8( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.b( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:21 compute-0 ceph-mon[74357]: Deploying daemon mds.cephfs.compute-2.jocsot on compute-2
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.17( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.11( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.10( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-mon[74357]: osdmap e51: 3 total, 3 up, 3 in
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.12( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.5( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.14( empty local-lis/les=24/25 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1d( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.19( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.16( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1e( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.d( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.0( empty local-lis/les=50/51 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.7( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.b( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.17( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.10( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.14( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 51 pg[7.12( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=24/24 les/c/f=25/25/0 sis=50) [0] r=0 lpr=50 pi=[24,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v137: 181 pgs: 1 peering, 77 unknown, 103 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:31:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:22 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:22 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:22 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Jan 27 08:31:22 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Jan 27 08:31:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 27 08:31:22 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:22 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:22 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 27 08:31:22 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 27 08:31:22 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev ac5e334c-e740-4db4-8e0d-36dfab02b092 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 27 08:31:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 27 08:31:22 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:22 compute-0 ceph-mon[74357]: pgmap v137: 181 pgs: 1 peering, 77 unknown, 103 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:31:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:22 compute-0 ceph-mon[74357]: osdmap e52: 3 total, 3 up, 3 in
Jan 27 08:31:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:31:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:31:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 27 08:31:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ceuaum", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 27 08:31:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ceuaum", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 27 08:31:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ceuaum", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 27 08:31:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:23 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:23 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ceuaum on compute-0
Jan 27 08:31:23 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ceuaum on compute-0
Jan 27 08:31:23 compute-0 sudo[96034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:23 compute-0 sudo[96034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:23 compute-0 sudo[96034]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:23 compute-0 sudo[96059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:23 compute-0 sudo[96059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:23 compute-0 sudo[96059]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:23.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:23 compute-0 sudo[96084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:23 compute-0 sudo[96084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:23 compute-0 sudo[96084]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:23 compute-0 sudo[96109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:31:23 compute-0 sudo[96109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 27 08:31:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:23.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 27 08:31:23 compute-0 podman[96177]: 2026-01-27 08:31:23.95628647 +0000 UTC m=+0.049201778 container create 52dd4230778531d73aef115832aff69ca833f38884b37b9bb0723301ab887e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hofstadter, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 08:31:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 27 08:31:23 compute-0 systemd[1]: Started libpod-conmon-52dd4230778531d73aef115832aff69ca833f38884b37b9bb0723301ab887e55.scope.
Jan 27 08:31:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 27 08:31:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] update: starting ev ac81c33d-065d-4140-afd1-e0419fc6e14c (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev b994f03c-bd3f-47a2-96e8-075aca709896 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event b994f03c-bd3f-47a2-96e8-075aca709896 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev 7be78d11-05c4-4980-b1dc-dd60dba85379 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event 7be78d11-05c4-4980-b1dc-dd60dba85379 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev 4085f361-f477-4ae0-a879-4c8bd64c5c98 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event 4085f361-f477-4ae0-a879-4c8bd64c5c98 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev 47c6e1b4-1601-4605-bfe1-3e8c8dc15222 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event 47c6e1b4-1601-4605-bfe1-3e8c8dc15222 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev 6dfdf0bb-e915-45f6-8efe-6761fbf7b706 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event 6dfdf0bb-e915-45f6-8efe-6761fbf7b706 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev 719478cb-5730-432e-9670-d619b7ee0071 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event 719478cb-5730-432e-9670-d619b7ee0071 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev ac5e334c-e740-4db4-8e0d-36dfab02b092 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event ac5e334c-e740-4db4-8e0d-36dfab02b092 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev ac81c33d-065d-4140-afd1-e0419fc6e14c (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event ac81c33d-065d-4140-afd1-e0419fc6e14c (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 27 08:31:24 compute-0 podman[96177]: 2026-01-27 08:31:23.929325485 +0000 UTC m=+0.022240833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:24 compute-0 ceph-mon[74357]: 7.1 deep-scrub starts
Jan 27 08:31:24 compute-0 ceph-mon[74357]: 7.1 deep-scrub ok
Jan 27 08:31:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 27 08:31:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ceuaum", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 27 08:31:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ceuaum", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 27 08:31:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:24 compute-0 ceph-mon[74357]: Deploying daemon mds.cephfs.compute-0.ceuaum on compute-0
Jan 27 08:31:24 compute-0 podman[96177]: 2026-01-27 08:31:24.030624464 +0000 UTC m=+0.123539832 container init 52dd4230778531d73aef115832aff69ca833f38884b37b9bb0723301ab887e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:31:24 compute-0 podman[96177]: 2026-01-27 08:31:24.037812593 +0000 UTC m=+0.130727921 container start 52dd4230778531d73aef115832aff69ca833f38884b37b9bb0723301ab887e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:31:24 compute-0 cranky_hofstadter[96194]: 167 167
Jan 27 08:31:24 compute-0 podman[96177]: 2026-01-27 08:31:24.041651883 +0000 UTC m=+0.134567201 container attach 52dd4230778531d73aef115832aff69ca833f38884b37b9bb0723301ab887e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hofstadter, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e3 new map
Jan 27 08:31:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-27T08:30:43.081071+0000
                                           modified        2026-01-27T08:30:43.081110+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.jocsot{-1:24157} state up:standby seq 1 addr [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] compat {c=[1],r=[1],i=[7ff]}]
Jan 27 08:31:24 compute-0 systemd[1]: libpod-52dd4230778531d73aef115832aff69ca833f38884b37b9bb0723301ab887e55.scope: Deactivated successfully.
Jan 27 08:31:24 compute-0 podman[96177]: 2026-01-27 08:31:24.043642985 +0000 UTC m=+0.136558313 container died 52dd4230778531d73aef115832aff69ca833f38884b37b9bb0723301ab887e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] up:boot
Jan 27 08:31:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] as mds.0
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.jocsot assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 27 08:31:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.jocsot"} v 0) v1
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.jocsot"}]: dispatch
Jan 27 08:31:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e3 all = 0
Jan 27 08:31:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e4 new map
Jan 27 08:31:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-27T08:30:43.081071+0000
                                           modified        2026-01-27T08:31:24.048005+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.jocsot{0:24157} state up:creating seq 1 addr [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:creating}
Jan 27 08:31:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cfef2ada1a23981032a3f5b9d9d3e04c46d02a6a0f054a7c4fb41d4f5bbccc6-merged.mount: Deactivated successfully.
Jan 27 08:31:24 compute-0 podman[96177]: 2026-01-27 08:31:24.083336953 +0000 UTC m=+0.176252271 container remove 52dd4230778531d73aef115832aff69ca833f38884b37b9bb0723301ab887e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.jocsot is now active in filesystem cephfs as rank 0
Jan 27 08:31:24 compute-0 systemd[1]: libpod-conmon-52dd4230778531d73aef115832aff69ca833f38884b37b9bb0723301ab887e55.scope: Deactivated successfully.
Jan 27 08:31:24 compute-0 systemd[1]: Reloading.
Jan 27 08:31:24 compute-0 systemd-rc-local-generator[96234]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:31:24 compute-0 systemd-sysv-generator[96237]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:31:24 compute-0 systemd[1]: Reloading.
Jan 27 08:31:24 compute-0 systemd-rc-local-generator[96282]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:31:24 compute-0 systemd-sysv-generator[96285]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: [progress INFO root] Writing back 19 completed events
Jan 27 08:31:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v140: 243 pgs: 1 peering, 139 unknown, 103 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:31:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:24 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:24 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.ceuaum for 281e9bde-2795-59f4-98ac-90cf5b49a2de...
Jan 27 08:31:24 compute-0 podman[96343]: 2026-01-27 08:31:24.952606479 +0000 UTC m=+0.036480125 container create 35f95cce580b18f68b6fea64a220a241f78c1ad112abd9adbb49d0a17509efc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mds-cephfs-compute-0-ceuaum, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/956e299366a629e522450de166e4a2dc5472d3fb0fc039c240f402818547ad1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/956e299366a629e522450de166e4a2dc5472d3fb0fc039c240f402818547ad1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/956e299366a629e522450de166e4a2dc5472d3fb0fc039c240f402818547ad1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/956e299366a629e522450de166e4a2dc5472d3fb0fc039c240f402818547ad1b/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ceuaum supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:25 compute-0 podman[96343]: 2026-01-27 08:31:25.017776334 +0000 UTC m=+0.101650000 container init 35f95cce580b18f68b6fea64a220a241f78c1ad112abd9adbb49d0a17509efc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mds-cephfs-compute-0-ceuaum, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:25 compute-0 podman[96343]: 2026-01-27 08:31:25.022719993 +0000 UTC m=+0.106593629 container start 35f95cce580b18f68b6fea64a220a241f78c1ad112abd9adbb49d0a17509efc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mds-cephfs-compute-0-ceuaum, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 08:31:25 compute-0 bash[96343]: 35f95cce580b18f68b6fea64a220a241f78c1ad112abd9adbb49d0a17509efc2
Jan 27 08:31:25 compute-0 podman[96343]: 2026-01-27 08:31:24.935950393 +0000 UTC m=+0.019824069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:25 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.ceuaum for 281e9bde-2795-59f4-98ac-90cf5b49a2de.
Jan 27 08:31:25 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:31:25 compute-0 ceph-mon[74357]: osdmap e53: 3 total, 3 up, 3 in
Jan 27 08:31:25 compute-0 ceph-mon[74357]: mds.? [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] up:boot
Jan 27 08:31:25 compute-0 ceph-mon[74357]: daemon mds.cephfs.compute-2.jocsot assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 27 08:31:25 compute-0 ceph-mon[74357]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 27 08:31:25 compute-0 ceph-mon[74357]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 27 08:31:25 compute-0 ceph-mon[74357]: Cluster is now healthy
Jan 27 08:31:25 compute-0 ceph-mon[74357]: fsmap cephfs:0 1 up:standby
Jan 27 08:31:25 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.jocsot"}]: dispatch
Jan 27 08:31:25 compute-0 ceph-mon[74357]: fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:creating}
Jan 27 08:31:25 compute-0 ceph-mon[74357]: daemon mds.cephfs.compute-2.jocsot is now active in filesystem cephfs as rank 0
Jan 27 08:31:25 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:25 compute-0 ceph-mon[74357]: pgmap v140: 243 pgs: 1 peering, 139 unknown, 103 active+clean; 455 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:31:25 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:25 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:25 compute-0 sudo[96109]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e5 new map
Jan 27 08:31:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-27T08:30:43.081071+0000
                                           modified        2026-01-27T08:31:25.055396+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.jocsot{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Jan 27 08:31:25 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] up:active
Jan 27 08:31:25 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active}
Jan 27 08:31:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:31:25 compute-0 ceph-mds[96364]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:31:25 compute-0 ceph-mds[96364]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 27 08:31:25 compute-0 ceph-mds[96364]: main not setting numa affinity
Jan 27 08:31:25 compute-0 ceph-mds[96364]: pidfile_write: ignore empty --pid-file
Jan 27 08:31:25 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mds-cephfs-compute-0-ceuaum[96359]: starting mds.cephfs.compute-0.ceuaum at 
Jan 27 08:31:25 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:31:25 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Updating MDS map to version 5 from mon.0
Jan 27 08:31:25 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 27 08:31:25 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.taxacd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 27 08:31:25 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.taxacd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 27 08:31:25 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.taxacd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 27 08:31:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:25 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:25 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.taxacd on compute-1
Jan 27 08:31:25 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.taxacd on compute-1
Jan 27 08:31:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:31:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:25.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:31:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 27 08:31:25 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:25 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 27 08:31:25 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 27 08:31:25 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 54 pg[10.0( v 42'48 (0'0,42'48] local-lis/les=41/42 n=8 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=54 pruub=9.265714645s) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 42'47 mlcod 42'47 active pruub 124.397308350s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:25 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 54 pg[10.0( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=54 pruub=9.265714645s) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 42'47 mlcod 0'0 unknown pruub 124.397308350s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:25.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mds.? [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] up:active
Jan 27 08:31:26 compute-0 ceph-mon[74357]: fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active}
Jan 27 08:31:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.taxacd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 27 08:31:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.taxacd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 27 08:31:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:26 compute-0 ceph-mon[74357]: Deploying daemon mds.cephfs.compute-1.taxacd on compute-1
Jan 27 08:31:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:31:26 compute-0 ceph-mon[74357]: osdmap e54: 3 total, 3 up, 3 in
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e6 new map
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-27T08:30:43.081071+0000
                                           modified        2026-01-27T08:31:25.055396+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.jocsot{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ceuaum{-1:14415} state up:standby seq 1 addr [v2:192.168.122.100:6806/631311113,v1:192.168.122.100:6807/631311113] compat {c=[1],r=[1],i=[7ff]}]
Jan 27 08:31:26 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Updating MDS map to version 6 from mon.0
Jan 27 08:31:26 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Monitors have assigned me to become a standby.
Jan 27 08:31:26 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/631311113,v1:192.168.122.100:6807/631311113] up:boot
Jan 27 08:31:26 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active} 1 up:standby
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ceuaum"} v 0) v1
Jan 27 08:31:26 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ceuaum"}]: dispatch
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e6 all = 0
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e7 new map
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-27T08:30:43.081071+0000
                                           modified        2026-01-27T08:31:25.055396+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jocsot{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ceuaum{-1:14415} state up:standby seq 1 addr [v2:192.168.122.100:6806/631311113,v1:192.168.122.100:6807/631311113] compat {c=[1],r=[1],i=[7ff]}]
Jan 27 08:31:26 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active} 1 up:standby
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:31:26 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:31:26 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 27 08:31:26 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:26 compute-0 ceph-mgr[74650]: [progress INFO root] complete: finished ev 355fda95-627b-4182-9188-850566c472a4 (Updating mds.cephfs deployment (+3 -> 3))
Jan 27 08:31:26 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event 355fda95-627b-4182-9188-850566c472a4 (Updating mds.cephfs deployment (+3 -> 3)) in 5 seconds
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 27 08:31:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 3.5 KiB/s wr, 10 op/s
Jan 27 08:31:26 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.12( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1f( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1e( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1d( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1a( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.6( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.5( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.19( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1c( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.b( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.4( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.8( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.a( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.c( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.d( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.f( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.3( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.15( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.e( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.9( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.7( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.2( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1( v 42'48 (0'0,42'48] local-lis/les=41/42 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.18( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1b( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.14( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.17( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.16( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.10( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.13( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.11( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.12( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1e( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1d( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1f( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.5( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.19( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.b( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.4( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1a( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.8( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.a( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.c( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.f( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.0( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 42'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.6( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.15( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.9( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.3( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.d( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.2( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.e( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.18( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1c( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.7( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.17( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.10( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.16( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.14( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.13( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.1b( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 55 pg[10.11( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 27 08:31:26 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:26 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 40b04714-8f51-4876-bdd8-4c4dca5ef0fb does not exist
Jan 27 08:31:26 compute-0 sudo[96384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:26 compute-0 sudo[96384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:26 compute-0 sudo[96384]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:26 compute-0 sudo[96409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:31:26 compute-0 sudo[96409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:26 compute-0 sudo[96409]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:26 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 27 08:31:26 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 27 08:31:27 compute-0 ceph-mon[74357]: mds.? [v2:192.168.122.100:6806/631311113,v1:192.168.122.100:6807/631311113] up:boot
Jan 27 08:31:27 compute-0 ceph-mon[74357]: fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active} 1 up:standby
Jan 27 08:31:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ceuaum"}]: dispatch
Jan 27 08:31:27 compute-0 ceph-mon[74357]: fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active} 1 up:standby
Jan 27 08:31:27 compute-0 ceph-mon[74357]: 2.1f scrub starts
Jan 27 08:31:27 compute-0 ceph-mon[74357]: 2.1f scrub ok
Jan 27 08:31:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:27 compute-0 ceph-mon[74357]: pgmap v143: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 3.5 KiB/s wr, 10 op/s
Jan 27 08:31:27 compute-0 ceph-mon[74357]: osdmap e55: 3 total, 3 up, 3 in
Jan 27 08:31:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:27 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e8 new map
Jan 27 08:31:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-27T08:30:43.081071+0000
                                           modified        2026-01-27T08:31:25.055396+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jocsot{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ceuaum{-1:14415} state up:standby seq 1 addr [v2:192.168.122.100:6806/631311113,v1:192.168.122.100:6807/631311113] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.taxacd{-1:24155} state up:standby seq 1 addr [v2:192.168.122.101:6804/2722093302,v1:192.168.122.101:6805/2722093302] compat {c=[1],r=[1],i=[7ff]}]
Jan 27 08:31:27 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2722093302,v1:192.168.122.101:6805/2722093302] up:boot
Jan 27 08:31:27 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active} 2 up:standby
Jan 27 08:31:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.taxacd"} v 0) v1
Jan 27 08:31:27 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.taxacd"}]: dispatch
Jan 27 08:31:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e8 all = 0
Jan 27 08:31:27 compute-0 sudo[96435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:27 compute-0 sudo[96435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:27 compute-0 sudo[96435]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:27 compute-0 sudo[96460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:27 compute-0 sudo[96460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:27 compute-0 sudo[96460]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:27 compute-0 sudo[96485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:27 compute-0 sudo[96485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:27 compute-0 sudo[96485]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:27 compute-0 sudo[96510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 27 08:31:27 compute-0 sudo[96510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:27.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:27.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:27 compute-0 podman[96608]: 2026-01-27 08:31:27.911277185 +0000 UTC m=+0.049053063 container exec b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 08:31:28 compute-0 podman[96608]: 2026-01-27 08:31:28.01884734 +0000 UTC m=+0.156623238 container exec_died b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:31:28 compute-0 ceph-mon[74357]: 7.2 scrub starts
Jan 27 08:31:28 compute-0 ceph-mon[74357]: 7.2 scrub ok
Jan 27 08:31:28 compute-0 ceph-mon[74357]: mds.? [v2:192.168.122.101:6804/2722093302,v1:192.168.122.101:6805/2722093302] up:boot
Jan 27 08:31:28 compute-0 ceph-mon[74357]: fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active} 2 up:standby
Jan 27 08:31:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.taxacd"}]: dispatch
Jan 27 08:31:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:31:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:31:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:28 compute-0 podman[96744]: 2026-01-27 08:31:28.482932987 +0000 UTC m=+0.053168581 container exec 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:31:28 compute-0 podman[96744]: 2026-01-27 08:31:28.493236307 +0000 UTC m=+0.063471921 container exec_died 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:31:28 compute-0 podman[96806]: 2026-01-27 08:31:28.683265848 +0000 UTC m=+0.054494736 container exec eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, architecture=x86_64, com.redhat.component=keepalived-container, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, version=2.2.4, vcs-type=git)
Jan 27 08:31:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v144: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s wr, 7 op/s
Jan 27 08:31:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e9 new map
Jan 27 08:31:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-27T08:30:43.081071+0000
                                           modified        2026-01-27T08:31:28.689926+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jocsot{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ceuaum{-1:14415} state up:standby seq 1 addr [v2:192.168.122.100:6806/631311113,v1:192.168.122.100:6807/631311113] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.taxacd{-1:24155} state up:standby seq 1 addr [v2:192.168.122.101:6804/2722093302,v1:192.168.122.101:6805/2722093302] compat {c=[1],r=[1],i=[7ff]}]
Jan 27 08:31:28 compute-0 podman[96806]: 2026-01-27 08:31:28.698314721 +0000 UTC m=+0.069543589 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, io.openshift.expose-services=, version=2.2.4, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 27 08:31:28 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] up:active
Jan 27 08:31:28 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active} 2 up:standby
Jan 27 08:31:28 compute-0 sudo[96510]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:31:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:31:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:28 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 27 08:31:28 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 27 08:31:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:31:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:31:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:31:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:31:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:31:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:29 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 9f6b69ea-5e1c-4c31-83f4-7bf2894782a1 does not exist
Jan 27 08:31:29 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 28618322-c054-4a5c-9fe2-689a01bb8121 does not exist
Jan 27 08:31:29 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5fcde7a7-64a7-42c6-a589-75b7659da489 does not exist
Jan 27 08:31:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:31:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:31:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:31:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:31:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:29 compute-0 ceph-mon[74357]: pgmap v144: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s wr, 7 op/s
Jan 27 08:31:29 compute-0 ceph-mon[74357]: mds.? [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] up:active
Jan 27 08:31:29 compute-0 ceph-mon[74357]: fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active} 2 up:standby
Jan 27 08:31:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:31:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:31:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:31:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:29 compute-0 sudo[96856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:29 compute-0 sudo[96856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:29 compute-0 sudo[96856]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:29 compute-0 sudo[96881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:29 compute-0 sudo[96881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:29 compute-0 sudo[96881]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:29 compute-0 sudo[96906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:29 compute-0 sudo[96906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:29 compute-0 sudo[96906]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:29 compute-0 sudo[96931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:31:29 compute-0 sudo[96931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:29.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:29 compute-0 ceph-mgr[74650]: [progress INFO root] Writing back 20 completed events
Jan 27 08:31:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 27 08:31:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:29.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:29 compute-0 podman[96997]: 2026-01-27 08:31:29.829121578 +0000 UTC m=+0.048828118 container create a5707d8060ff3f4fd59ed3646cd1dbed0da9de46236923e75a6e33aabf5f093e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:31:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e10 new map
Jan 27 08:31:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e10 print_map
                                           e10
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-27T08:30:43.081071+0000
                                           modified        2026-01-27T08:31:28.689926+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jocsot{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ceuaum{-1:14415} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/631311113,v1:192.168.122.100:6807/631311113] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.taxacd{-1:24155} state up:standby seq 1 addr [v2:192.168.122.101:6804/2722093302,v1:192.168.122.101:6805/2722093302] compat {c=[1],r=[1],i=[7ff]}]
Jan 27 08:31:29 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Updating MDS map to version 10 from mon.0
Jan 27 08:31:29 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/631311113,v1:192.168.122.100:6807/631311113] up:standby
Jan 27 08:31:29 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active} 2 up:standby
Jan 27 08:31:29 compute-0 systemd[1]: Started libpod-conmon-a5707d8060ff3f4fd59ed3646cd1dbed0da9de46236923e75a6e33aabf5f093e.scope.
Jan 27 08:31:29 compute-0 podman[96997]: 2026-01-27 08:31:29.808218351 +0000 UTC m=+0.027924901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:29 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 27 08:31:29 compute-0 podman[96997]: 2026-01-27 08:31:29.922050658 +0000 UTC m=+0.141757198 container init a5707d8060ff3f4fd59ed3646cd1dbed0da9de46236923e75a6e33aabf5f093e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 08:31:29 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 27 08:31:29 compute-0 podman[96997]: 2026-01-27 08:31:29.932120852 +0000 UTC m=+0.151827362 container start a5707d8060ff3f4fd59ed3646cd1dbed0da9de46236923e75a6e33aabf5f093e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:29 compute-0 brave_bell[97013]: 167 167
Jan 27 08:31:29 compute-0 systemd[1]: libpod-a5707d8060ff3f4fd59ed3646cd1dbed0da9de46236923e75a6e33aabf5f093e.scope: Deactivated successfully.
Jan 27 08:31:29 compute-0 podman[96997]: 2026-01-27 08:31:29.935995263 +0000 UTC m=+0.155701773 container attach a5707d8060ff3f4fd59ed3646cd1dbed0da9de46236923e75a6e33aabf5f093e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:31:29 compute-0 podman[96997]: 2026-01-27 08:31:29.936490906 +0000 UTC m=+0.156197396 container died a5707d8060ff3f4fd59ed3646cd1dbed0da9de46236923e75a6e33aabf5f093e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7464fea38c0492c73eba5e9770dfb18fed4acc9d30f5a263266797cc9a9f470-merged.mount: Deactivated successfully.
Jan 27 08:31:29 compute-0 podman[96997]: 2026-01-27 08:31:29.973992077 +0000 UTC m=+0.193698587 container remove a5707d8060ff3f4fd59ed3646cd1dbed0da9de46236923e75a6e33aabf5f093e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:31:29 compute-0 systemd[1]: libpod-conmon-a5707d8060ff3f4fd59ed3646cd1dbed0da9de46236923e75a6e33aabf5f093e.scope: Deactivated successfully.
Jan 27 08:31:30 compute-0 podman[97036]: 2026-01-27 08:31:30.209483986 +0000 UTC m=+0.104378390 container create 7a5234077664b8fe0c76dadb161be7da37b9683bb262418985c37da72ed5ce07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_boyd, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:31:30 compute-0 podman[97036]: 2026-01-27 08:31:30.149035065 +0000 UTC m=+0.043929539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:30 compute-0 systemd[1]: Started libpod-conmon-7a5234077664b8fe0c76dadb161be7da37b9683bb262418985c37da72ed5ce07.scope.
Jan 27 08:31:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd66c043d1f9a0943125babc9a4b19e1cf76343ee23f7ffea2cd7d1c4a11352/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd66c043d1f9a0943125babc9a4b19e1cf76343ee23f7ffea2cd7d1c4a11352/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd66c043d1f9a0943125babc9a4b19e1cf76343ee23f7ffea2cd7d1c4a11352/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd66c043d1f9a0943125babc9a4b19e1cf76343ee23f7ffea2cd7d1c4a11352/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd66c043d1f9a0943125babc9a4b19e1cf76343ee23f7ffea2cd7d1c4a11352/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:30 compute-0 podman[97036]: 2026-01-27 08:31:30.344620991 +0000 UTC m=+0.239515505 container init 7a5234077664b8fe0c76dadb161be7da37b9683bb262418985c37da72ed5ce07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_boyd, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 27 08:31:30 compute-0 ceph-mon[74357]: 7.3 scrub starts
Jan 27 08:31:30 compute-0 ceph-mon[74357]: 7.3 scrub ok
Jan 27 08:31:30 compute-0 ceph-mon[74357]: 2.1e scrub starts
Jan 27 08:31:30 compute-0 ceph-mon[74357]: 2.1e scrub ok
Jan 27 08:31:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:30 compute-0 ceph-mon[74357]: mds.? [v2:192.168.122.100:6806/631311113,v1:192.168.122.100:6807/631311113] up:standby
Jan 27 08:31:30 compute-0 ceph-mon[74357]: fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active} 2 up:standby
Jan 27 08:31:30 compute-0 ceph-mon[74357]: 2.12 scrub starts
Jan 27 08:31:30 compute-0 ceph-mon[74357]: 2.12 scrub ok
Jan 27 08:31:30 compute-0 podman[97036]: 2026-01-27 08:31:30.353742919 +0000 UTC m=+0.248637333 container start 7a5234077664b8fe0c76dadb161be7da37b9683bb262418985c37da72ed5ce07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_boyd, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 27 08:31:30 compute-0 podman[97036]: 2026-01-27 08:31:30.372206143 +0000 UTC m=+0.267100537 container attach 7a5234077664b8fe0c76dadb161be7da37b9683bb262418985c37da72ed5ce07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s wr, 6 op/s
Jan 27 08:31:30 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 27 08:31:30 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 27 08:31:31 compute-0 distracted_boyd[97052]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:31:31 compute-0 distracted_boyd[97052]: --> relative data size: 1.0
Jan 27 08:31:31 compute-0 distracted_boyd[97052]: --> All data devices are unavailable
Jan 27 08:31:31 compute-0 systemd[1]: libpod-7a5234077664b8fe0c76dadb161be7da37b9683bb262418985c37da72ed5ce07.scope: Deactivated successfully.
Jan 27 08:31:31 compute-0 podman[97036]: 2026-01-27 08:31:31.186973973 +0000 UTC m=+1.081868437 container died 7a5234077664b8fe0c76dadb161be7da37b9683bb262418985c37da72ed5ce07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dd66c043d1f9a0943125babc9a4b19e1cf76343ee23f7ffea2cd7d1c4a11352-merged.mount: Deactivated successfully.
Jan 27 08:31:31 compute-0 podman[97036]: 2026-01-27 08:31:31.254379216 +0000 UTC m=+1.149273650 container remove 7a5234077664b8fe0c76dadb161be7da37b9683bb262418985c37da72ed5ce07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_boyd, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:31 compute-0 systemd[1]: libpod-conmon-7a5234077664b8fe0c76dadb161be7da37b9683bb262418985c37da72ed5ce07.scope: Deactivated successfully.
Jan 27 08:31:31 compute-0 sudo[96931]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:31 compute-0 sudo[97081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:31 compute-0 sudo[97081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:31 compute-0 sudo[97081]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:31 compute-0 sudo[97106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:31 compute-0 sudo[97106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:31 compute-0 sudo[97106]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:31 compute-0 sudo[97131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:31 compute-0 sudo[97131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:31 compute-0 sudo[97131]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:31 compute-0 sudo[97156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:31:31 compute-0 sudo[97156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:31.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e11 new map
Jan 27 08:31:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).mds e11 print_map
                                           e11
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-27T08:30:43.081071+0000
                                           modified        2026-01-27T08:31:28.689926+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jocsot{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2515891021,v1:192.168.122.102:6805/2515891021] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ceuaum{-1:14415} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/631311113,v1:192.168.122.100:6807/631311113] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.taxacd{-1:24155} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2722093302,v1:192.168.122.101:6805/2722093302] compat {c=[1],r=[1],i=[7ff]}]
Jan 27 08:31:31 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2722093302,v1:192.168.122.101:6805/2722093302] up:standby
Jan 27 08:31:31 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active} 2 up:standby
Jan 27 08:31:31 compute-0 ceph-mon[74357]: 7.4 scrub starts
Jan 27 08:31:31 compute-0 ceph-mon[74357]: 7.4 scrub ok
Jan 27 08:31:31 compute-0 ceph-mon[74357]: pgmap v145: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s wr, 6 op/s
Jan 27 08:31:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:31.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:32 compute-0 podman[97221]: 2026-01-27 08:31:32.011773376 +0000 UTC m=+0.045570883 container create d7fd1306f0b362612b59ad94f9cae9d7b8853837432cb2f63fb43a979035722b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:31:32 compute-0 systemd[1]: Started libpod-conmon-d7fd1306f0b362612b59ad94f9cae9d7b8853837432cb2f63fb43a979035722b.scope.
Jan 27 08:31:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:32 compute-0 podman[97221]: 2026-01-27 08:31:32.081029318 +0000 UTC m=+0.114826845 container init d7fd1306f0b362612b59ad94f9cae9d7b8853837432cb2f63fb43a979035722b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 27 08:31:32 compute-0 podman[97221]: 2026-01-27 08:31:31.994307499 +0000 UTC m=+0.028104996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:32 compute-0 podman[97221]: 2026-01-27 08:31:32.091282856 +0000 UTC m=+0.125080353 container start d7fd1306f0b362612b59ad94f9cae9d7b8853837432cb2f63fb43a979035722b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 27 08:31:32 compute-0 magical_curran[97237]: 167 167
Jan 27 08:31:32 compute-0 podman[97221]: 2026-01-27 08:31:32.094928071 +0000 UTC m=+0.128725598 container attach d7fd1306f0b362612b59ad94f9cae9d7b8853837432cb2f63fb43a979035722b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 27 08:31:32 compute-0 systemd[1]: libpod-d7fd1306f0b362612b59ad94f9cae9d7b8853837432cb2f63fb43a979035722b.scope: Deactivated successfully.
Jan 27 08:31:32 compute-0 podman[97221]: 2026-01-27 08:31:32.095568658 +0000 UTC m=+0.129366145 container died d7fd1306f0b362612b59ad94f9cae9d7b8853837432cb2f63fb43a979035722b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:31:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-914ce4d66f6b99bfdd492f91b02f13f156e4f73935c4bd2e567203393e225eef-merged.mount: Deactivated successfully.
Jan 27 08:31:32 compute-0 podman[97221]: 2026-01-27 08:31:32.130392708 +0000 UTC m=+0.164190205 container remove d7fd1306f0b362612b59ad94f9cae9d7b8853837432cb2f63fb43a979035722b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 08:31:32 compute-0 systemd[1]: libpod-conmon-d7fd1306f0b362612b59ad94f9cae9d7b8853837432cb2f63fb43a979035722b.scope: Deactivated successfully.
Jan 27 08:31:32 compute-0 podman[97261]: 2026-01-27 08:31:32.353234757 +0000 UTC m=+0.065963026 container create 698960e21a463a215dafa365650fef4b001d025a89658549ba6c179268f9e8ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ride, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:32 compute-0 systemd[1]: Started libpod-conmon-698960e21a463a215dafa365650fef4b001d025a89658549ba6c179268f9e8ab.scope.
Jan 27 08:31:32 compute-0 podman[97261]: 2026-01-27 08:31:32.327389921 +0000 UTC m=+0.040118270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e314ef8af8ad2535bcbbc18162b94d6221db31159ccd84a5a22842f5daf0cb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e314ef8af8ad2535bcbbc18162b94d6221db31159ccd84a5a22842f5daf0cb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e314ef8af8ad2535bcbbc18162b94d6221db31159ccd84a5a22842f5daf0cb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e314ef8af8ad2535bcbbc18162b94d6221db31159ccd84a5a22842f5daf0cb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:32 compute-0 podman[97261]: 2026-01-27 08:31:32.444188497 +0000 UTC m=+0.156916776 container init 698960e21a463a215dafa365650fef4b001d025a89658549ba6c179268f9e8ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:32 compute-0 podman[97261]: 2026-01-27 08:31:32.451444896 +0000 UTC m=+0.164173155 container start 698960e21a463a215dafa365650fef4b001d025a89658549ba6c179268f9e8ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ride, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:31:32 compute-0 podman[97261]: 2026-01-27 08:31:32.453939681 +0000 UTC m=+0.166667940 container attach 698960e21a463a215dafa365650fef4b001d025a89658549ba6c179268f9e8ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Jan 27 08:31:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s wr, 5 op/s
Jan 27 08:31:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 27 08:31:32 compute-0 ceph-mon[74357]: 7.5 scrub starts
Jan 27 08:31:32 compute-0 ceph-mon[74357]: 7.5 scrub ok
Jan 27 08:31:32 compute-0 ceph-mon[74357]: 4.1 scrub starts
Jan 27 08:31:32 compute-0 ceph-mon[74357]: 4.1 scrub ok
Jan 27 08:31:32 compute-0 ceph-mon[74357]: mds.? [v2:192.168.122.101:6804/2722093302,v1:192.168.122.101:6805/2722093302] up:standby
Jan 27 08:31:32 compute-0 ceph-mon[74357]: fsmap cephfs:1 {0=cephfs.compute-2.jocsot=up:active} 2 up:standby
Jan 27 08:31:32 compute-0 ceph-mon[74357]: 3.1a scrub starts
Jan 27 08:31:32 compute-0 ceph-mon[74357]: 3.1a scrub ok
Jan 27 08:31:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:31:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 27 08:31:32 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.12( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.950522423s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.154541016s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.12( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.950465202s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.154541016s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.1f( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.252916336s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.457015991s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.13( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.252524376s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456863403s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.13( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.252468109s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456863403s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.11( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.252226830s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456848145s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.1f( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.252838135s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.457015991s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.11( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.252178192s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456848145s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.10( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.252393723s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.457000732s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.19( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.956274033s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.161132812s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.10( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.252157211s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.457000732s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.19( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.956243515s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.161132812s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.b( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.251724243s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456863403s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.14( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.251971245s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.457122803s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.b( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.251696587s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456863403s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.14( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.251929283s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.457122803s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.5( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.955710411s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.161071777s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.5( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.955637932s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.161071777s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.8( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.251242638s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456848145s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.8( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.251220703s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456848145s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.4( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.955403328s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.161163330s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.9( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.250894547s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456680298s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.4( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.955375671s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.161163330s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.6( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.250492096s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456680298s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.6( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.250420570s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456680298s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.9( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.250669479s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456680298s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.1e( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.954405785s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.160644531s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.5( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.250711441s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.457214355s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.5( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.250646591s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.457214355s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.8( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.954951286s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.161224365s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.2( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.249771118s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456527710s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.8( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.954480171s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.161224365s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.2( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.249740601s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456527710s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.f( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.954303741s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.161270142s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.e( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.249262810s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456283569s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.f( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.954259872s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.161270142s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.e( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.249236107s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456283569s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.18( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.249076843s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456298828s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.18( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.249045372s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456298828s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.15( v 55'51 (0'0,55'51] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.954047203s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=55'49 lcod 55'50 mlcod 55'50 active pruub 132.161331177s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.15( v 55'51 (0'0,55'51] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.953987122s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=55'49 lcod 55'50 mlcod 0'0 unknown NOTIFY pruub 132.161331177s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.3( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.248919487s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456405640s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.3( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.248847008s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456405640s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.4( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.248390198s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456100464s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.a( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.248190880s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.455963135s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.4( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.248352051s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456100464s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.a( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.248159409s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.455963135s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.1( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.953542709s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.161529541s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.f( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.247858047s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.455902100s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.1( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.953508377s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.161529541s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.f( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.247822762s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.455902100s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.2( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.953267097s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.161499023s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.2( v 42'48 (0'0,42'48] local-lis/les=54/55 n=1 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.953232765s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.161499023s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.18( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.953249931s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.161590576s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.18( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.953183174s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.161590576s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.1b( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.953420639s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.161911011s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.1e( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.952919960s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.160644531s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.1b( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.953383446s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.161911011s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.16( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.247107506s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.455703735s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.16( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.247077942s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.455703735s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.14( v 55'51 (0'0,55'51] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.952943802s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=55'49 lcod 55'50 mlcod 55'50 active pruub 132.161621094s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.14( v 55'51 (0'0,55'51] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.952897072s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=55'49 lcod 55'50 mlcod 0'0 unknown NOTIFY pruub 132.161621094s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.11( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.954731941s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.163696289s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.11( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.954707146s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.163696289s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.1b( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.247826576s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456848145s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.1b( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.247791290s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456848145s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.3( v 55'51 (0'0,55'51] local-lis/les=54/55 n=1 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.952370644s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=55'49 lcod 55'50 mlcod 55'50 active pruub 132.161300659s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.1d( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.241312981s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.450729370s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.1d( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.241268158s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.450729370s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.10( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.952096939s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.161682129s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.10( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.952069283s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.161682129s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.13( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.951868057s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 132.161727905s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.3( v 55'51 (0'0,55'51] local-lis/les=54/55 n=1 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.951846123s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=55'49 lcod 55'50 mlcod 0'0 unknown NOTIFY pruub 132.161300659s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[10.13( v 42'48 (0'0,42'48] local-lis/les=54/55 n=0 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=9.951814651s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.161727905s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.1e( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.245789528s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active pruub 135.456054688s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.16( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[7.1e( empty local-lis/les=50/51 n=0 ec=50/24 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.245739937s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.456054688s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.18( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.11( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.10( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.7( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.15( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.2( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.1c( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.9( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.1b( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.1f( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.f( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[5.1( empty local-lis/les=0/0 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[11.12( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[4.1b( empty local-lis/les=0/0 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[11.14( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[8.17( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[8.10( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[8.1b( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[11.1b( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[11.1c( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[4.13( empty local-lis/les=0/0 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[11.1d( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[4.1a( empty local-lis/les=0/0 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[8.18( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[4.e( empty local-lis/les=0/0 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[11.1e( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[11.1( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[11.4( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[11.5( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[4.a( empty local-lis/les=0/0 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[8.8( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[4.d( empty local-lis/les=0/0 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[4.18( empty local-lis/les=0/0 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[8.14( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[4.5( empty local-lis/les=0/0 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[11.f( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[4.c( empty local-lis/les=0/0 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[11.7( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[11.1a( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[8.4( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[8.19( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:32 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 56 pg[8.12( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:33 compute-0 sudo[97283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:33 compute-0 sudo[97283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:33 compute-0 sudo[97283]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:33 compute-0 sudo[97309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:33 compute-0 sudo[97309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:33 compute-0 sudo[97309]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]: {
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:     "0": [
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:         {
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             "devices": [
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "/dev/loop3"
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             ],
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             "lv_name": "ceph_lv0",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             "lv_size": "7511998464",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             "name": "ceph_lv0",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             "tags": {
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "ceph.cluster_name": "ceph",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "ceph.crush_device_class": "",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "ceph.encrypted": "0",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "ceph.osd_id": "0",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "ceph.type": "block",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:                 "ceph.vdo": "0"
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             },
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             "type": "block",
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:             "vg_name": "ceph_vg0"
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:         }
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]:     ]
Jan 27 08:31:33 compute-0 xenodochial_ride[97278]: }
Jan 27 08:31:33 compute-0 systemd[1]: libpod-698960e21a463a215dafa365650fef4b001d025a89658549ba6c179268f9e8ab.scope: Deactivated successfully.
Jan 27 08:31:33 compute-0 podman[97261]: 2026-01-27 08:31:33.248107823 +0000 UTC m=+0.960836122 container died 698960e21a463a215dafa365650fef4b001d025a89658549ba6c179268f9e8ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 27 08:31:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e314ef8af8ad2535bcbbc18162b94d6221db31159ccd84a5a22842f5daf0cb8-merged.mount: Deactivated successfully.
Jan 27 08:31:33 compute-0 podman[97261]: 2026-01-27 08:31:33.300824872 +0000 UTC m=+1.013553131 container remove 698960e21a463a215dafa365650fef4b001d025a89658549ba6c179268f9e8ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:31:33 compute-0 systemd[1]: libpod-conmon-698960e21a463a215dafa365650fef4b001d025a89658549ba6c179268f9e8ab.scope: Deactivated successfully.
Jan 27 08:31:33 compute-0 sudo[97156]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:33 compute-0 sudo[97350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:33 compute-0 sudo[97350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:33 compute-0 sudo[97350]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:33 compute-0 sudo[97375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:33 compute-0 sudo[97375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:33 compute-0 sudo[97375]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:33 compute-0 sudo[97400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:33 compute-0 sudo[97400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:33 compute-0 sudo[97400]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:33.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:33 compute-0 sudo[97425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:31:33 compute-0 sudo[97425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:31:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:33.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:31:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 27 08:31:33 compute-0 ceph-mon[74357]: 4.2 deep-scrub starts
Jan 27 08:31:33 compute-0 ceph-mon[74357]: 4.2 deep-scrub ok
Jan 27 08:31:33 compute-0 ceph-mon[74357]: pgmap v146: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s wr, 5 op/s
Jan 27 08:31:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:31:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:31:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:31:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 27 08:31:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:31:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 27 08:31:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:31:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:31:33 compute-0 ceph-mon[74357]: osdmap e56: 3 total, 3 up, 3 in
Jan 27 08:31:33 compute-0 ceph-mon[74357]: 3.1b scrub starts
Jan 27 08:31:33 compute-0 ceph-mon[74357]: 3.1b scrub ok
Jan 27 08:31:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 27 08:31:33 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[11.12( empty local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.1c( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.1f( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[8.12( v 38'8 lc 0'0 (0'0,38'8] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[4.18( empty local-lis/les=56/57 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[8.14( v 38'8 (0'0,38'8] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.18( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.1b( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[11.1a( empty local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[8.19( v 38'8 (0'0,38'8] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[4.1a( empty local-lis/les=56/57 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[4.c( empty local-lis/les=56/57 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.1( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[4.1b( empty local-lis/les=56/57 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[8.17( v 38'8 (0'0,38'8] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[4.d( empty local-lis/les=56/57 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.f( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[4.e( empty local-lis/les=56/57 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.2( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[8.8( v 38'8 (0'0,38'8] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.7( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[4.5( empty local-lis/les=56/57 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[11.5( empty local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.9( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[11.7( empty local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[8.4( v 38'8 (0'0,38'8] local-lis/les=56/57 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[4.a( empty local-lis/les=56/57 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[8.1b( v 38'8 (0'0,38'8] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[11.1b( empty local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.16( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[8.18( v 38'8 (0'0,38'8] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[11.1d( empty local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.15( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[4.13( empty local-lis/les=56/57 n=0 ec=48/18 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[11.1c( empty local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.10( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[5.11( empty local-lis/les=56/57 n=0 ec=48/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[8.10( v 38'8 (0'0,38'8] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 57 pg[11.1e( empty local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:33 compute-0 podman[97490]: 2026-01-27 08:31:33.976443913 +0000 UTC m=+0.034517654 container create 6a21b3e5e0ce5a3877be52718587db5751ce21ddb4305fcb1feb84a70b43a7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 08:31:34 compute-0 systemd[1]: Started libpod-conmon-6a21b3e5e0ce5a3877be52718587db5751ce21ddb4305fcb1feb84a70b43a7db.scope.
Jan 27 08:31:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:34 compute-0 podman[97490]: 2026-01-27 08:31:34.04128928 +0000 UTC m=+0.099363071 container init 6a21b3e5e0ce5a3877be52718587db5751ce21ddb4305fcb1feb84a70b43a7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_raman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 08:31:34 compute-0 podman[97490]: 2026-01-27 08:31:34.050176412 +0000 UTC m=+0.108250163 container start 6a21b3e5e0ce5a3877be52718587db5751ce21ddb4305fcb1feb84a70b43a7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_raman, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 08:31:34 compute-0 dazzling_raman[97506]: 167 167
Jan 27 08:31:34 compute-0 systemd[1]: libpod-6a21b3e5e0ce5a3877be52718587db5751ce21ddb4305fcb1feb84a70b43a7db.scope: Deactivated successfully.
Jan 27 08:31:34 compute-0 podman[97490]: 2026-01-27 08:31:34.055793699 +0000 UTC m=+0.113867490 container attach 6a21b3e5e0ce5a3877be52718587db5751ce21ddb4305fcb1feb84a70b43a7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_raman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:31:34 compute-0 podman[97490]: 2026-01-27 08:31:34.056528808 +0000 UTC m=+0.114602609 container died 6a21b3e5e0ce5a3877be52718587db5751ce21ddb4305fcb1feb84a70b43a7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_raman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 08:31:34 compute-0 podman[97490]: 2026-01-27 08:31:33.960613519 +0000 UTC m=+0.018687290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-89954d4c4e548de6c119afa313390c70c4742d34d977c29d028f542e4a01e5ff-merged.mount: Deactivated successfully.
Jan 27 08:31:34 compute-0 podman[97490]: 2026-01-27 08:31:34.092973001 +0000 UTC m=+0.151046752 container remove 6a21b3e5e0ce5a3877be52718587db5751ce21ddb4305fcb1feb84a70b43a7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 08:31:34 compute-0 systemd[1]: libpod-conmon-6a21b3e5e0ce5a3877be52718587db5751ce21ddb4305fcb1feb84a70b43a7db.scope: Deactivated successfully.
Jan 27 08:31:34 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 27 08:31:34 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 27 08:31:34 compute-0 podman[97530]: 2026-01-27 08:31:34.254762573 +0000 UTC m=+0.043887699 container create b50e3a84c33088e24a24931a9af54c1a4c543b3f72f8689473ad2a037903e76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ride, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:31:34 compute-0 systemd[1]: Started libpod-conmon-b50e3a84c33088e24a24931a9af54c1a4c543b3f72f8689473ad2a037903e76b.scope.
Jan 27 08:31:34 compute-0 podman[97530]: 2026-01-27 08:31:34.238305332 +0000 UTC m=+0.027430468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7a1b199c1dd795c4e57356edac9f6ea6b5fc7f4a30a4498bced0c82834f945/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7a1b199c1dd795c4e57356edac9f6ea6b5fc7f4a30a4498bced0c82834f945/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7a1b199c1dd795c4e57356edac9f6ea6b5fc7f4a30a4498bced0c82834f945/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7a1b199c1dd795c4e57356edac9f6ea6b5fc7f4a30a4498bced0c82834f945/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:34 compute-0 podman[97530]: 2026-01-27 08:31:34.355534179 +0000 UTC m=+0.144659325 container init b50e3a84c33088e24a24931a9af54c1a4c543b3f72f8689473ad2a037903e76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ride, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 27 08:31:34 compute-0 podman[97530]: 2026-01-27 08:31:34.363242571 +0000 UTC m=+0.152367677 container start b50e3a84c33088e24a24931a9af54c1a4c543b3f72f8689473ad2a037903e76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ride, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:34 compute-0 podman[97530]: 2026-01-27 08:31:34.365964902 +0000 UTC m=+0.155090118 container attach b50e3a84c33088e24a24931a9af54c1a4c543b3f72f8689473ad2a037903e76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ride, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:31:34 compute-0 ceph-mgr[74650]: [progress INFO root] Completed event a680b7da-0e5a-4a66-bc38-d4c524fa3cf4 (Global Recovery Event) in 15 seconds
Jan 27 08:31:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 27 08:31:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 27 08:31:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 27 08:31:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 27 08:31:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 27 08:31:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 27 08:31:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 27 08:31:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 27 08:31:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 27 08:31:34 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 27 08:31:34 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 27 08:31:34 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 58 pg[6.a( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:34 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 58 pg[6.6( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:34 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 58 pg[6.2( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:34 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 58 pg[6.e( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:34 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 27 08:31:35 compute-0 ceph-mon[74357]: 4.4 scrub starts
Jan 27 08:31:35 compute-0 ceph-mon[74357]: 4.4 scrub ok
Jan 27 08:31:35 compute-0 ceph-mon[74357]: osdmap e57: 3 total, 3 up, 3 in
Jan 27 08:31:35 compute-0 ceph-mon[74357]: 7.7 scrub starts
Jan 27 08:31:35 compute-0 ceph-mon[74357]: 7.7 scrub ok
Jan 27 08:31:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 27 08:31:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 27 08:31:35 compute-0 brave_ride[97546]: {
Jan 27 08:31:35 compute-0 brave_ride[97546]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:31:35 compute-0 brave_ride[97546]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:31:35 compute-0 brave_ride[97546]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:31:35 compute-0 brave_ride[97546]:         "osd_id": 0,
Jan 27 08:31:35 compute-0 brave_ride[97546]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:31:35 compute-0 brave_ride[97546]:         "type": "bluestore"
Jan 27 08:31:35 compute-0 brave_ride[97546]:     }
Jan 27 08:31:35 compute-0 brave_ride[97546]: }
Jan 27 08:31:35 compute-0 systemd[1]: libpod-b50e3a84c33088e24a24931a9af54c1a4c543b3f72f8689473ad2a037903e76b.scope: Deactivated successfully.
Jan 27 08:31:35 compute-0 conmon[97546]: conmon b50e3a84c33088e24a24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b50e3a84c33088e24a24931a9af54c1a4c543b3f72f8689473ad2a037903e76b.scope/container/memory.events
Jan 27 08:31:35 compute-0 podman[97530]: 2026-01-27 08:31:35.188829054 +0000 UTC m=+0.977954170 container died b50e3a84c33088e24a24931a9af54c1a4c543b3f72f8689473ad2a037903e76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:31:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f7a1b199c1dd795c4e57356edac9f6ea6b5fc7f4a30a4498bced0c82834f945-merged.mount: Deactivated successfully.
Jan 27 08:31:35 compute-0 podman[97530]: 2026-01-27 08:31:35.241099291 +0000 UTC m=+1.030224407 container remove b50e3a84c33088e24a24931a9af54c1a4c543b3f72f8689473ad2a037903e76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ride, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:31:35 compute-0 systemd[1]: libpod-conmon-b50e3a84c33088e24a24931a9af54c1a4c543b3f72f8689473ad2a037903e76b.scope: Deactivated successfully.
Jan 27 08:31:35 compute-0 sudo[97425]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:31:35 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:31:35 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:35 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 738a4553-261b-4854-a60e-712ce339d304 does not exist
Jan 27 08:31:35 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 91be9d2a-3110-4b1f-8cad-b7758ea3a98c does not exist
Jan 27 08:31:35 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 0f144cd4-540c-4c3d-8faa-5345baa3d5c1 does not exist
Jan 27 08:31:35 compute-0 sudo[97580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:35 compute-0 sudo[97580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:35 compute-0 sudo[97580]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:35.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:35 compute-0 sudo[97605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:31:35 compute-0 sudo[97605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:35 compute-0 sudo[97605]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:31:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:35.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:31:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 27 08:31:35 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 27 08:31:35 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 27 08:31:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 27 08:31:35 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 27 08:31:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 27 08:31:35 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 27 08:31:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:35 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:35 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 27 08:31:35 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 27 08:31:35 compute-0 sudo[97630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:35 compute-0 sudo[97630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:35 compute-0 sudo[97630]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 27 08:31:36 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 27 08:31:36 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 59 pg[6.2( empty local-lis/les=58/59 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:36 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 59 pg[6.e( v 53'3 lc 53'1 (0'0,53'3] local-lis/les=58/59 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 crt=53'3 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:36 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 59 pg[6.6( v 55'1 lc 0'0 (0'0,55'1] local-lis/les=58/59 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 crt=55'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:36 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 59 pg[6.a( v 53'1 (0'0,53'1] local-lis/les=58/59 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 crt=53'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:36 compute-0 sudo[97655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:36 compute-0 sudo[97655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:36 compute-0 sudo[97655]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:36 compute-0 sudo[97680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:36 compute-0 sudo[97680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:36 compute-0 sudo[97680]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:36 compute-0 sudo[97705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:31:36 compute-0 sudo[97705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:36 compute-0 ceph-mon[74357]: 4.7 scrub starts
Jan 27 08:31:36 compute-0 ceph-mon[74357]: 4.7 scrub ok
Jan 27 08:31:36 compute-0 ceph-mon[74357]: pgmap v149: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 27 08:31:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 27 08:31:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 27 08:31:36 compute-0 ceph-mon[74357]: osdmap e58: 3 total, 3 up, 3 in
Jan 27 08:31:36 compute-0 ceph-mon[74357]: 7.c scrub starts
Jan 27 08:31:36 compute-0 ceph-mon[74357]: 7.c scrub ok
Jan 27 08:31:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 27 08:31:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 27 08:31:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:36 compute-0 ceph-mon[74357]: osdmap e59: 3 total, 3 up, 3 in
Jan 27 08:31:36 compute-0 podman[97745]: 2026-01-27 08:31:36.371504147 +0000 UTC m=+0.036172906 container create 1a1a3734f2f0ff181aedd2baae72b2e9e0e196a6f16c41c611df1b173c83cd61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:31:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:36 compute-0 systemd[1]: Started libpod-conmon-1a1a3734f2f0ff181aedd2baae72b2e9e0e196a6f16c41c611df1b173c83cd61.scope.
Jan 27 08:31:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:36 compute-0 podman[97745]: 2026-01-27 08:31:36.449588679 +0000 UTC m=+0.114257438 container init 1a1a3734f2f0ff181aedd2baae72b2e9e0e196a6f16c41c611df1b173c83cd61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 27 08:31:36 compute-0 podman[97745]: 2026-01-27 08:31:36.354624985 +0000 UTC m=+0.019293774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:36 compute-0 podman[97745]: 2026-01-27 08:31:36.461693217 +0000 UTC m=+0.126361976 container start 1a1a3734f2f0ff181aedd2baae72b2e9e0e196a6f16c41c611df1b173c83cd61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 27 08:31:36 compute-0 sad_meninsky[97761]: 167 167
Jan 27 08:31:36 compute-0 podman[97745]: 2026-01-27 08:31:36.465535907 +0000 UTC m=+0.130204686 container attach 1a1a3734f2f0ff181aedd2baae72b2e9e0e196a6f16c41c611df1b173c83cd61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 27 08:31:36 compute-0 systemd[1]: libpod-1a1a3734f2f0ff181aedd2baae72b2e9e0e196a6f16c41c611df1b173c83cd61.scope: Deactivated successfully.
Jan 27 08:31:36 compute-0 podman[97745]: 2026-01-27 08:31:36.466393399 +0000 UTC m=+0.131062168 container died 1a1a3734f2f0ff181aedd2baae72b2e9e0e196a6f16c41c611df1b173c83cd61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce817e7d3f4bb1203cbc00806f81ff3c8cae15c3ac5dd41ebdaae3b2c5143795-merged.mount: Deactivated successfully.
Jan 27 08:31:36 compute-0 podman[97745]: 2026-01-27 08:31:36.505220004 +0000 UTC m=+0.169888763 container remove 1a1a3734f2f0ff181aedd2baae72b2e9e0e196a6f16c41c611df1b173c83cd61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:31:36 compute-0 systemd[1]: libpod-conmon-1a1a3734f2f0ff181aedd2baae72b2e9e0e196a6f16c41c611df1b173c83cd61.scope: Deactivated successfully.
Jan 27 08:31:36 compute-0 sudo[97705]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:31:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:31:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:36 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.vujqxq (monmap changed)...
Jan 27 08:31:36 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.vujqxq (monmap changed)...
Jan 27 08:31:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.vujqxq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 27 08:31:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vujqxq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 27 08:31:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 27 08:31:36 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 08:31:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:36 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:36 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.vujqxq on compute-0
Jan 27 08:31:36 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.vujqxq on compute-0
Jan 27 08:31:36 compute-0 sudo[97779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:36 compute-0 sudo[97779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:36 compute-0 sudo[97779]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 1 active+recovery_wait, 5 active+recovery_wait+degraded, 4 peering, 1 active+recovering, 294 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 9/215 objects degraded (4.186%); 369 B/s, 2 keys/s, 2 objects/s recovering
Jan 27 08:31:36 compute-0 sudo[97804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:36 compute-0 sudo[97804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:36 compute-0 sudo[97804]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:36 compute-0 sudo[97829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:36 compute-0 sudo[97829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:36 compute-0 sudo[97829]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:36 compute-0 sudo[97854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:31:36 compute-0 sudo[97854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:37 compute-0 podman[97896]: 2026-01-27 08:31:37.058090846 +0000 UTC m=+0.033460726 container create d806897e1465e9a4c9e8311bfce75d28e68c177507abac961059d003bd6b8f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nobel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:31:37 compute-0 systemd[1]: Started libpod-conmon-d806897e1465e9a4c9e8311bfce75d28e68c177507abac961059d003bd6b8f4b.scope.
Jan 27 08:31:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:37 compute-0 podman[97896]: 2026-01-27 08:31:37.127760457 +0000 UTC m=+0.103130347 container init d806897e1465e9a4c9e8311bfce75d28e68c177507abac961059d003bd6b8f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nobel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:31:37 compute-0 podman[97896]: 2026-01-27 08:31:37.134784321 +0000 UTC m=+0.110154191 container start d806897e1465e9a4c9e8311bfce75d28e68c177507abac961059d003bd6b8f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 27 08:31:37 compute-0 podman[97896]: 2026-01-27 08:31:37.138242382 +0000 UTC m=+0.113612292 container attach d806897e1465e9a4c9e8311bfce75d28e68c177507abac961059d003bd6b8f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:31:37 compute-0 hungry_nobel[97912]: 167 167
Jan 27 08:31:37 compute-0 systemd[1]: libpod-d806897e1465e9a4c9e8311bfce75d28e68c177507abac961059d003bd6b8f4b.scope: Deactivated successfully.
Jan 27 08:31:37 compute-0 podman[97896]: 2026-01-27 08:31:37.139603838 +0000 UTC m=+0.114973768 container died d806897e1465e9a4c9e8311bfce75d28e68c177507abac961059d003bd6b8f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:37 compute-0 podman[97896]: 2026-01-27 08:31:37.044153071 +0000 UTC m=+0.019522971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-86ab41e823d1ad2560d557252f742b174fb76bc509e1ef7a3e6c84f74f1e5284-merged.mount: Deactivated successfully.
Jan 27 08:31:37 compute-0 ceph-mon[74357]: 4.b scrub starts
Jan 27 08:31:37 compute-0 ceph-mon[74357]: 4.b scrub ok
Jan 27 08:31:37 compute-0 ceph-mon[74357]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 27 08:31:37 compute-0 ceph-mon[74357]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 27 08:31:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:37 compute-0 ceph-mon[74357]: Reconfiguring mgr.compute-0.vujqxq (monmap changed)...
Jan 27 08:31:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vujqxq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 27 08:31:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 08:31:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:37 compute-0 ceph-mon[74357]: Reconfiguring daemon mgr.compute-0.vujqxq on compute-0
Jan 27 08:31:37 compute-0 ceph-mon[74357]: pgmap v152: 305 pgs: 1 active+recovery_wait, 5 active+recovery_wait+degraded, 4 peering, 1 active+recovering, 294 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 9/215 objects degraded (4.186%); 369 B/s, 2 keys/s, 2 objects/s recovering
Jan 27 08:31:37 compute-0 ceph-mon[74357]: 3.8 scrub starts
Jan 27 08:31:37 compute-0 ceph-mon[74357]: 3.8 scrub ok
Jan 27 08:31:37 compute-0 podman[97896]: 2026-01-27 08:31:37.1786735 +0000 UTC m=+0.154043380 container remove d806897e1465e9a4c9e8311bfce75d28e68c177507abac961059d003bd6b8f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nobel, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:31:37 compute-0 systemd[1]: libpod-conmon-d806897e1465e9a4c9e8311bfce75d28e68c177507abac961059d003bd6b8f4b.scope: Deactivated successfully.
Jan 27 08:31:37 compute-0 sudo[97854]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:31:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:31:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:37 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 27 08:31:37 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 27 08:31:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 27 08:31:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 27 08:31:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:37 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 27 08:31:37 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 27 08:31:37 compute-0 sudo[97932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:37 compute-0 sudo[97932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:37 compute-0 sudo[97932]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:37 compute-0 sudo[97957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:37 compute-0 sudo[97957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:37 compute-0 sudo[97957]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:37 compute-0 sudo[97982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:37 compute-0 sudo[97982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:37 compute-0 sudo[97982]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:37 compute-0 sudo[98007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:31:37 compute-0 sudo[98007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:37 compute-0 ceph-mon[74357]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 9/215 objects degraded (4.186%), 5 pgs degraded (PG_DEGRADED)
Jan 27 08:31:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:37.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:31:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:37.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:31:37 compute-0 podman[98048]: 2026-01-27 08:31:37.750692111 +0000 UTC m=+0.034117613 container create 05daffbdc133e6965afb6cfbb166fcae70a65bdf03d65fa681fd0aeb86e028bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meitner, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:37 compute-0 systemd[1]: Started libpod-conmon-05daffbdc133e6965afb6cfbb166fcae70a65bdf03d65fa681fd0aeb86e028bc.scope.
Jan 27 08:31:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:37 compute-0 podman[98048]: 2026-01-27 08:31:37.801099339 +0000 UTC m=+0.084524861 container init 05daffbdc133e6965afb6cfbb166fcae70a65bdf03d65fa681fd0aeb86e028bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meitner, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 27 08:31:37 compute-0 podman[98048]: 2026-01-27 08:31:37.805936235 +0000 UTC m=+0.089361737 container start 05daffbdc133e6965afb6cfbb166fcae70a65bdf03d65fa681fd0aeb86e028bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meitner, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:37 compute-0 sad_meitner[98064]: 167 167
Jan 27 08:31:37 compute-0 podman[98048]: 2026-01-27 08:31:37.809270993 +0000 UTC m=+0.092696515 container attach 05daffbdc133e6965afb6cfbb166fcae70a65bdf03d65fa681fd0aeb86e028bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:31:37 compute-0 systemd[1]: libpod-05daffbdc133e6965afb6cfbb166fcae70a65bdf03d65fa681fd0aeb86e028bc.scope: Deactivated successfully.
Jan 27 08:31:37 compute-0 conmon[98064]: conmon 05daffbdc133e6965afb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-05daffbdc133e6965afb6cfbb166fcae70a65bdf03d65fa681fd0aeb86e028bc.scope/container/memory.events
Jan 27 08:31:37 compute-0 podman[98048]: 2026-01-27 08:31:37.810495315 +0000 UTC m=+0.093920817 container died 05daffbdc133e6965afb6cfbb166fcae70a65bdf03d65fa681fd0aeb86e028bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meitner, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f70b38ec5184c546fb0c704d8aa57166786d2f153085f42c5d57c38b06178b65-merged.mount: Deactivated successfully.
Jan 27 08:31:37 compute-0 podman[98048]: 2026-01-27 08:31:37.736173601 +0000 UTC m=+0.019599123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:37 compute-0 podman[98048]: 2026-01-27 08:31:37.84244615 +0000 UTC m=+0.125871652 container remove 05daffbdc133e6965afb6cfbb166fcae70a65bdf03d65fa681fd0aeb86e028bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meitner, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 08:31:37 compute-0 systemd[1]: libpod-conmon-05daffbdc133e6965afb6cfbb166fcae70a65bdf03d65fa681fd0aeb86e028bc.scope: Deactivated successfully.
Jan 27 08:31:37 compute-0 sudo[98007]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:31:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:31:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:37 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 27 08:31:37 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 27 08:31:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 27 08:31:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 27 08:31:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:37 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Jan 27 08:31:37 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Jan 27 08:31:37 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 27 08:31:37 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 27 08:31:37 compute-0 sudo[98083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:37 compute-0 sudo[98083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:37 compute-0 sudo[98083]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:38 compute-0 sudo[98108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:38 compute-0 sudo[98108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:38 compute-0 sudo[98108]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:38 compute-0 sudo[98133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:38 compute-0 sudo[98133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:38 compute-0 sudo[98133]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:38 compute-0 sudo[98158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de
Jan 27 08:31:38 compute-0 sudo[98158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:38 compute-0 ceph-mon[74357]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 27 08:31:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 27 08:31:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:38 compute-0 ceph-mon[74357]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 27 08:31:38 compute-0 ceph-mon[74357]: Health check failed: Degraded data redundancy: 9/215 objects degraded (4.186%), 5 pgs degraded (PG_DEGRADED)
Jan 27 08:31:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 27 08:31:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:38 compute-0 podman[98198]: 2026-01-27 08:31:38.351560976 +0000 UTC m=+0.030269892 container create dd19f2ef2d7f6fe25de09d9b17bcb564fdd0dc8c33ef91425f33b711c11df27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:31:38 compute-0 systemd[1]: Started libpod-conmon-dd19f2ef2d7f6fe25de09d9b17bcb564fdd0dc8c33ef91425f33b711c11df27c.scope.
Jan 27 08:31:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:38 compute-0 podman[98198]: 2026-01-27 08:31:38.403745422 +0000 UTC m=+0.082454358 container init dd19f2ef2d7f6fe25de09d9b17bcb564fdd0dc8c33ef91425f33b711c11df27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jennings, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:31:38 compute-0 podman[98198]: 2026-01-27 08:31:38.408991619 +0000 UTC m=+0.087700535 container start dd19f2ef2d7f6fe25de09d9b17bcb564fdd0dc8c33ef91425f33b711c11df27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jennings, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:31:38 compute-0 great_jennings[98214]: 167 167
Jan 27 08:31:38 compute-0 systemd[1]: libpod-dd19f2ef2d7f6fe25de09d9b17bcb564fdd0dc8c33ef91425f33b711c11df27c.scope: Deactivated successfully.
Jan 27 08:31:38 compute-0 podman[98198]: 2026-01-27 08:31:38.412902221 +0000 UTC m=+0.091611157 container attach dd19f2ef2d7f6fe25de09d9b17bcb564fdd0dc8c33ef91425f33b711c11df27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jennings, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:31:38 compute-0 podman[98198]: 2026-01-27 08:31:38.413901207 +0000 UTC m=+0.092610133 container died dd19f2ef2d7f6fe25de09d9b17bcb564fdd0dc8c33ef91425f33b711c11df27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jennings, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-18909191e95686f8ad21c004c300edd80d2736660bbca41e589d9437c60f37bf-merged.mount: Deactivated successfully.
Jan 27 08:31:38 compute-0 podman[98198]: 2026-01-27 08:31:38.338618299 +0000 UTC m=+0.017327235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:38 compute-0 podman[98198]: 2026-01-27 08:31:38.447205228 +0000 UTC m=+0.125914154 container remove dd19f2ef2d7f6fe25de09d9b17bcb564fdd0dc8c33ef91425f33b711c11df27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:38 compute-0 systemd[1]: libpod-conmon-dd19f2ef2d7f6fe25de09d9b17bcb564fdd0dc8c33ef91425f33b711c11df27c.scope: Deactivated successfully.
Jan 27 08:31:38 compute-0 sudo[98158]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:31:38 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:31:38 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:38 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 27 08:31:38 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 27 08:31:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 27 08:31:38 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 27 08:31:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:38 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:38 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 27 08:31:38 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 27 08:31:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 1 active+recovery_wait, 5 active+recovery_wait+degraded, 4 peering, 1 active+recovering, 294 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 9/215 objects degraded (4.186%); 248 B/s, 1 keys/s, 1 objects/s recovering
Jan 27 08:31:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:31:39 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:31:39 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:39 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 27 08:31:39 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 27 08:31:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 27 08:31:39 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 27 08:31:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:39 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:39 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Jan 27 08:31:39 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Jan 27 08:31:39 compute-0 ceph-mon[74357]: Reconfiguring osd.0 (monmap changed)...
Jan 27 08:31:39 compute-0 ceph-mon[74357]: Reconfiguring daemon osd.0 on compute-0
Jan 27 08:31:39 compute-0 ceph-mon[74357]: 7.d scrub starts
Jan 27 08:31:39 compute-0 ceph-mon[74357]: 7.d scrub ok
Jan 27 08:31:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:39 compute-0 ceph-mon[74357]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 27 08:31:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 27 08:31:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:39 compute-0 ceph-mon[74357]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 27 08:31:39 compute-0 ceph-mon[74357]: pgmap v153: 305 pgs: 1 active+recovery_wait, 5 active+recovery_wait+degraded, 4 peering, 1 active+recovering, 294 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 9/215 objects degraded (4.186%); 248 B/s, 1 keys/s, 1 objects/s recovering
Jan 27 08:31:39 compute-0 ceph-mon[74357]: 3.0 deep-scrub starts
Jan 27 08:31:39 compute-0 ceph-mon[74357]: 3.0 deep-scrub ok
Jan 27 08:31:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 27 08:31:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:39.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:39 compute-0 ceph-mgr[74650]: [progress INFO root] Writing back 21 completed events
Jan 27 08:31:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 27 08:31:39 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:31:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:39.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:31:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:31:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:31:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:40 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 27 08:31:40 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 27 08:31:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 27 08:31:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 27 08:31:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 27 08:31:40 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 27 08:31:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:40 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:40 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 27 08:31:40 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 27 08:31:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 504 B/s, 1 keys/s, 2 objects/s recovering
Jan 27 08:31:40 compute-0 ceph-mon[74357]: Reconfiguring osd.1 (monmap changed)...
Jan 27 08:31:40 compute-0 ceph-mon[74357]: Reconfiguring daemon osd.1 on compute-1
Jan 27 08:31:40 compute-0 ceph-mon[74357]: 4.f scrub starts
Jan 27 08:31:40 compute-0 ceph-mon[74357]: 4.f scrub ok
Jan 27 08:31:40 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:40 compute-0 ceph-mon[74357]: 2.1b scrub starts
Jan 27 08:31:40 compute-0 ceph-mon[74357]: 2.1b scrub ok
Jan 27 08:31:40 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:40 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:40 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 27 08:31:40 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 27 08:31:40 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:31:40 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:40 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Jan 27 08:31:40 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Jan 27 08:31:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:31:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:31:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:41 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 27 08:31:41 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 27 08:31:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 27 08:31:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 27 08:31:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 27 08:31:41 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 27 08:31:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:41 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:41 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 27 08:31:41 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 27 08:31:41 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 9/215 objects degraded (4.186%), 5 pgs degraded)
Jan 27 08:31:41 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 27 08:31:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:41.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:41 compute-0 ceph-mon[74357]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 27 08:31:41 compute-0 ceph-mon[74357]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 27 08:31:41 compute-0 ceph-mon[74357]: 4.10 deep-scrub starts
Jan 27 08:31:41 compute-0 ceph-mon[74357]: 4.10 deep-scrub ok
Jan 27 08:31:41 compute-0 ceph-mon[74357]: pgmap v154: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 504 B/s, 1 keys/s, 2 objects/s recovering
Jan 27 08:31:41 compute-0 ceph-mon[74357]: 7.12 deep-scrub starts
Jan 27 08:31:41 compute-0 ceph-mon[74357]: 7.12 deep-scrub ok
Jan 27 08:31:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 27 08:31:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 27 08:31:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:41 compute-0 ceph-mon[74357]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 9/215 objects degraded (4.186%), 5 pgs degraded)
Jan 27 08:31:41 compute-0 ceph-mon[74357]: Cluster is now healthy
Jan 27 08:31:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:41.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:31:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:31:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:41 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.cbywrc (monmap changed)...
Jan 27 08:31:41 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.cbywrc (monmap changed)...
Jan 27 08:31:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.cbywrc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 27 08:31:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cbywrc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 27 08:31:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 27 08:31:41 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 08:31:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:41 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:41 compute-0 ceph-mgr[74650]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.cbywrc on compute-2
Jan 27 08:31:41 compute-0 ceph-mgr[74650]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.cbywrc on compute-2
Jan 27 08:31:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 434 B/s, 1 keys/s, 2 objects/s recovering
Jan 27 08:31:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 27 08:31:42 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 27 08:31:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 27 08:31:42 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 27 08:31:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 27 08:31:42 compute-0 ceph-mon[74357]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 27 08:31:42 compute-0 ceph-mon[74357]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 27 08:31:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cbywrc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 27 08:31:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 08:31:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 27 08:31:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 27 08:31:43 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 27 08:31:43 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 27 08:31:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 27 08:31:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 27 08:31:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 27 08:31:43 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 27 08:31:43 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 60 pg[6.7( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:43 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 60 pg[6.3( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=56/56 les/c/f=57/58/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:43 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 60 pg[6.f( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:43 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 60 pg[6.b( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:31:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:31:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:43 compute-0 sudo[98243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:43 compute-0 sudo[98243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:43 compute-0 sudo[98243]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:43 compute-0 sudo[98268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:43 compute-0 sudo[98268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:43 compute-0 sudo[98268]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:43 compute-0 sudo[98293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:43 compute-0 sudo[98293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:43 compute-0 sudo[98293]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:43 compute-0 sudo[98318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 27 08:31:43 compute-0 sudo[98318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:43.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:43.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:43 compute-0 ceph-mon[74357]: Reconfiguring mgr.compute-2.cbywrc (monmap changed)...
Jan 27 08:31:43 compute-0 ceph-mon[74357]: Reconfiguring daemon mgr.compute-2.cbywrc on compute-2
Jan 27 08:31:43 compute-0 ceph-mon[74357]: pgmap v155: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 434 B/s, 1 keys/s, 2 objects/s recovering
Jan 27 08:31:43 compute-0 ceph-mon[74357]: 7.15 scrub starts
Jan 27 08:31:43 compute-0 ceph-mon[74357]: 7.15 scrub ok
Jan 27 08:31:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 27 08:31:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 27 08:31:43 compute-0 ceph-mon[74357]: osdmap e60: 3 total, 3 up, 3 in
Jan 27 08:31:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:44 compute-0 podman[98415]: 2026-01-27 08:31:44.001698901 +0000 UTC m=+0.051096358 container exec b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 27 08:31:44 compute-0 podman[98436]: 2026-01-27 08:31:44.155051862 +0000 UTC m=+0.050791309 container exec_died b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:44 compute-0 podman[98415]: 2026-01-27 08:31:44.159615072 +0000 UTC m=+0.209012519 container exec_died b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:31:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 27 08:31:44 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 27 08:31:44 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 61 pg[6.b( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=60/61 n=1 ec=50/22 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=53'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:44 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 61 pg[6.3( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=60/61 n=2 ec=50/22 lis/c=56/56 les/c/f=57/58/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=53'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:44 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 61 pg[6.7( v 53'2 lc 53'1 (0'0,53'2] local-lis/les=60/61 n=1 ec=50/22 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=53'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:44 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 61 pg[6.f( v 53'5 lc 53'1 (0'0,53'5] local-lis/les=60/61 n=3 ec=50/22 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=53'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:31:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:31:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:44 compute-0 podman[98552]: 2026-01-27 08:31:44.60267628 +0000 UTC m=+0.055190165 container exec 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:31:44 compute-0 podman[98552]: 2026-01-27 08:31:44.616255496 +0000 UTC m=+0.068769381 container exec_died 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:31:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 249 B/s, 1 objects/s recovering
Jan 27 08:31:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 27 08:31:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 27 08:31:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 27 08:31:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 27 08:31:44 compute-0 podman[98618]: 2026-01-27 08:31:44.793915762 +0000 UTC m=+0.046127428 container exec eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, release=1793)
Jan 27 08:31:44 compute-0 podman[98618]: 2026-01-27 08:31:44.811251725 +0000 UTC m=+0.063463401 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, version=2.2.4, name=keepalived, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, release=1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 08:31:44 compute-0 sudo[98318]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:31:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:31:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:31:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:31:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:31:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:31:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:31:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:31:45 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 27 08:31:45 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 27 08:31:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 27 08:31:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 27 08:31:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 27 08:31:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 27 08:31:45 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 27 08:31:45 compute-0 ceph-mon[74357]: osdmap e61: 3 total, 3 up, 3 in
Jan 27 08:31:45 compute-0 ceph-mon[74357]: 4.11 scrub starts
Jan 27 08:31:45 compute-0 ceph-mon[74357]: 4.11 scrub ok
Jan 27 08:31:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:45 compute-0 ceph-mon[74357]: pgmap v158: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 249 B/s, 1 objects/s recovering
Jan 27 08:31:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 27 08:31:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 27 08:31:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:31:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:31:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:45 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:31:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:31:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:31:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:45 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev e53c3714-81bd-4a79-b8bc-218058c94673 does not exist
Jan 27 08:31:45 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 4d1d7f71-19d3-4bc6-90a4-d70a927918dc does not exist
Jan 27 08:31:45 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 59fbb4a9-2bdd-4a1e-99fe-616c1f4c702f does not exist
Jan 27 08:31:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:31:45 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:31:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:31:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:31:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:31:45 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:45 compute-0 sudo[98669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:45 compute-0 sudo[98669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:45 compute-0 sudo[98669]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:45 compute-0 sudo[98694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:45 compute-0 sudo[98694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:45 compute-0 sudo[98694]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:45 compute-0 sudo[98719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:45 compute-0 sudo[98719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:45 compute-0 sudo[98719]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:45 compute-0 sudo[98744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:31:45 compute-0 sudo[98744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:45.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:31:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:45.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:31:45 compute-0 podman[98808]: 2026-01-27 08:31:45.874913977 +0000 UTC m=+0.036923377 container create 59edb12819b512644eff5ec7020cbb2c76719e8d6f1462119993067f893cb570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_engelbart, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 27 08:31:45 compute-0 systemd[1]: Started libpod-conmon-59edb12819b512644eff5ec7020cbb2c76719e8d6f1462119993067f893cb570.scope.
Jan 27 08:31:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:45 compute-0 podman[98808]: 2026-01-27 08:31:45.927634236 +0000 UTC m=+0.089643656 container init 59edb12819b512644eff5ec7020cbb2c76719e8d6f1462119993067f893cb570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_engelbart, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:31:45 compute-0 podman[98808]: 2026-01-27 08:31:45.932972595 +0000 UTC m=+0.094981985 container start 59edb12819b512644eff5ec7020cbb2c76719e8d6f1462119993067f893cb570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:31:45 compute-0 podman[98808]: 2026-01-27 08:31:45.936431276 +0000 UTC m=+0.098440686 container attach 59edb12819b512644eff5ec7020cbb2c76719e8d6f1462119993067f893cb570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:45 compute-0 naughty_engelbart[98824]: 167 167
Jan 27 08:31:45 compute-0 systemd[1]: libpod-59edb12819b512644eff5ec7020cbb2c76719e8d6f1462119993067f893cb570.scope: Deactivated successfully.
Jan 27 08:31:45 compute-0 podman[98808]: 2026-01-27 08:31:45.939375222 +0000 UTC m=+0.101384612 container died 59edb12819b512644eff5ec7020cbb2c76719e8d6f1462119993067f893cb570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 08:31:45 compute-0 podman[98808]: 2026-01-27 08:31:45.85901554 +0000 UTC m=+0.021024950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ecdabb9716e73bff9cf5ea4212009266594f6e33a07722766d39281f27e7325-merged.mount: Deactivated successfully.
Jan 27 08:31:45 compute-0 podman[98808]: 2026-01-27 08:31:45.982003577 +0000 UTC m=+0.144012987 container remove 59edb12819b512644eff5ec7020cbb2c76719e8d6f1462119993067f893cb570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_engelbart, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:31:45 compute-0 systemd[1]: libpod-conmon-59edb12819b512644eff5ec7020cbb2c76719e8d6f1462119993067f893cb570.scope: Deactivated successfully.
Jan 27 08:31:46 compute-0 podman[98846]: 2026-01-27 08:31:46.142964158 +0000 UTC m=+0.040396468 container create 43811f267ecb06e69a5a5dc179c23ddca37c569668668ace58a891979104c1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_poitras, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 27 08:31:46 compute-0 systemd[1]: Started libpod-conmon-43811f267ecb06e69a5a5dc179c23ddca37c569668668ace58a891979104c1ae.scope.
Jan 27 08:31:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3824e4f4f160a3d619334a2e6f0ba34b293de55ab49653035e1f4adcb6653efd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3824e4f4f160a3d619334a2e6f0ba34b293de55ab49653035e1f4adcb6653efd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3824e4f4f160a3d619334a2e6f0ba34b293de55ab49653035e1f4adcb6653efd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3824e4f4f160a3d619334a2e6f0ba34b293de55ab49653035e1f4adcb6653efd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3824e4f4f160a3d619334a2e6f0ba34b293de55ab49653035e1f4adcb6653efd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:46 compute-0 ceph-mon[74357]: 2.c scrub starts
Jan 27 08:31:46 compute-0 ceph-mon[74357]: 2.c scrub ok
Jan 27 08:31:46 compute-0 ceph-mon[74357]: 7.17 scrub starts
Jan 27 08:31:46 compute-0 ceph-mon[74357]: 7.17 scrub ok
Jan 27 08:31:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 27 08:31:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 27 08:31:46 compute-0 ceph-mon[74357]: osdmap e62: 3 total, 3 up, 3 in
Jan 27 08:31:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:31:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:31:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:31:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:31:46 compute-0 podman[98846]: 2026-01-27 08:31:46.12585728 +0000 UTC m=+0.023289600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:46 compute-0 podman[98846]: 2026-01-27 08:31:46.225432824 +0000 UTC m=+0.122865164 container init 43811f267ecb06e69a5a5dc179c23ddca37c569668668ace58a891979104c1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_poitras, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 08:31:46 compute-0 podman[98846]: 2026-01-27 08:31:46.235095657 +0000 UTC m=+0.132527967 container start 43811f267ecb06e69a5a5dc179c23ddca37c569668668ace58a891979104c1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:31:46 compute-0 podman[98846]: 2026-01-27 08:31:46.239106932 +0000 UTC m=+0.136539272 container attach 43811f267ecb06e69a5a5dc179c23ddca37c569668668ace58a891979104c1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 27 08:31:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 27 08:31:46 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 27 08:31:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 8 remapped+peering, 297 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 159 B/s, 2 keys/s, 1 objects/s recovering
Jan 27 08:31:47 compute-0 elastic_poitras[98862]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:31:47 compute-0 elastic_poitras[98862]: --> relative data size: 1.0
Jan 27 08:31:47 compute-0 elastic_poitras[98862]: --> All data devices are unavailable
Jan 27 08:31:47 compute-0 systemd[1]: libpod-43811f267ecb06e69a5a5dc179c23ddca37c569668668ace58a891979104c1ae.scope: Deactivated successfully.
Jan 27 08:31:47 compute-0 podman[98846]: 2026-01-27 08:31:47.030653725 +0000 UTC m=+0.928086035 container died 43811f267ecb06e69a5a5dc179c23ddca37c569668668ace58a891979104c1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_poitras, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3824e4f4f160a3d619334a2e6f0ba34b293de55ab49653035e1f4adcb6653efd-merged.mount: Deactivated successfully.
Jan 27 08:31:47 compute-0 podman[98846]: 2026-01-27 08:31:47.098079209 +0000 UTC m=+0.995511539 container remove 43811f267ecb06e69a5a5dc179c23ddca37c569668668ace58a891979104c1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:47 compute-0 systemd[1]: libpod-conmon-43811f267ecb06e69a5a5dc179c23ddca37c569668668ace58a891979104c1ae.scope: Deactivated successfully.
Jan 27 08:31:47 compute-0 sudo[98744]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:47 compute-0 sudo[98891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:47 compute-0 sudo[98891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:47 compute-0 sudo[98891]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:47 compute-0 sudo[98916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:47 compute-0 sudo[98916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:47 compute-0 sudo[98916]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 27 08:31:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 27 08:31:47 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 27 08:31:47 compute-0 ceph-mon[74357]: osdmap e63: 3 total, 3 up, 3 in
Jan 27 08:31:47 compute-0 ceph-mon[74357]: 4.12 scrub starts
Jan 27 08:31:47 compute-0 ceph-mon[74357]: 4.12 scrub ok
Jan 27 08:31:47 compute-0 ceph-mon[74357]: pgmap v161: 305 pgs: 8 remapped+peering, 297 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 159 B/s, 2 keys/s, 1 objects/s recovering
Jan 27 08:31:47 compute-0 sudo[98941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:47 compute-0 sudo[98941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:47 compute-0 sudo[98941]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:47 compute-0 sudo[98966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:31:47 compute-0 sudo[98966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:47.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:47 compute-0 podman[99031]: 2026-01-27 08:31:47.703453303 +0000 UTC m=+0.061333505 container create 11cde98192b7f4faa18ab4a945a6fbf472f57874c56fbc73e194f90d64f05bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:31:47 compute-0 systemd[1]: Started libpod-conmon-11cde98192b7f4faa18ab4a945a6fbf472f57874c56fbc73e194f90d64f05bd5.scope.
Jan 27 08:31:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:47.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:47 compute-0 podman[99031]: 2026-01-27 08:31:47.685180745 +0000 UTC m=+0.043060987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:47 compute-0 podman[99031]: 2026-01-27 08:31:47.779096862 +0000 UTC m=+0.136977134 container init 11cde98192b7f4faa18ab4a945a6fbf472f57874c56fbc73e194f90d64f05bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:31:47 compute-0 podman[99031]: 2026-01-27 08:31:47.78939179 +0000 UTC m=+0.147271982 container start 11cde98192b7f4faa18ab4a945a6fbf472f57874c56fbc73e194f90d64f05bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:31:47 compute-0 strange_poincare[99048]: 167 167
Jan 27 08:31:47 compute-0 systemd[1]: libpod-11cde98192b7f4faa18ab4a945a6fbf472f57874c56fbc73e194f90d64f05bd5.scope: Deactivated successfully.
Jan 27 08:31:47 compute-0 podman[99031]: 2026-01-27 08:31:47.793128339 +0000 UTC m=+0.151008541 container attach 11cde98192b7f4faa18ab4a945a6fbf472f57874c56fbc73e194f90d64f05bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:31:47 compute-0 podman[99031]: 2026-01-27 08:31:47.793393186 +0000 UTC m=+0.151273388 container died 11cde98192b7f4faa18ab4a945a6fbf472f57874c56fbc73e194f90d64f05bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6577fd1c9a4894e1b4d59d8b35aaa894b5a0b90eca0f09a1be397b6b328eec8e-merged.mount: Deactivated successfully.
Jan 27 08:31:47 compute-0 podman[99031]: 2026-01-27 08:31:47.832860228 +0000 UTC m=+0.190740430 container remove 11cde98192b7f4faa18ab4a945a6fbf472f57874c56fbc73e194f90d64f05bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:31:47 compute-0 systemd[1]: libpod-conmon-11cde98192b7f4faa18ab4a945a6fbf472f57874c56fbc73e194f90d64f05bd5.scope: Deactivated successfully.
Jan 27 08:31:48 compute-0 podman[99072]: 2026-01-27 08:31:48.0003877 +0000 UTC m=+0.055431351 container create 65253cb0d59944eb6509889fcfb8f754a64ca29e4a23ce54e08bc2fbdc0c5257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_proskuriakova, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:31:48 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 27 08:31:48 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 27 08:31:48 compute-0 systemd[1]: Started libpod-conmon-65253cb0d59944eb6509889fcfb8f754a64ca29e4a23ce54e08bc2fbdc0c5257.scope.
Jan 27 08:31:48 compute-0 podman[99072]: 2026-01-27 08:31:47.971526824 +0000 UTC m=+0.026570495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:48 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de5bbf4b27d61b959c2514fb6178f277c32828b2fffbb00cacfc62dcd730810/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de5bbf4b27d61b959c2514fb6178f277c32828b2fffbb00cacfc62dcd730810/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de5bbf4b27d61b959c2514fb6178f277c32828b2fffbb00cacfc62dcd730810/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de5bbf4b27d61b959c2514fb6178f277c32828b2fffbb00cacfc62dcd730810/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:48 compute-0 podman[99072]: 2026-01-27 08:31:48.108449736 +0000 UTC m=+0.163493457 container init 65253cb0d59944eb6509889fcfb8f754a64ca29e4a23ce54e08bc2fbdc0c5257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:31:48 compute-0 podman[99072]: 2026-01-27 08:31:48.117575134 +0000 UTC m=+0.172618775 container start 65253cb0d59944eb6509889fcfb8f754a64ca29e4a23ce54e08bc2fbdc0c5257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 27 08:31:48 compute-0 podman[99072]: 2026-01-27 08:31:48.124835145 +0000 UTC m=+0.179878826 container attach 65253cb0d59944eb6509889fcfb8f754a64ca29e4a23ce54e08bc2fbdc0c5257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:31:48 compute-0 ceph-mon[74357]: osdmap e64: 3 total, 3 up, 3 in
Jan 27 08:31:48 compute-0 ceph-mon[74357]: 4.16 scrub starts
Jan 27 08:31:48 compute-0 ceph-mon[74357]: 4.16 scrub ok
Jan 27 08:31:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 8 remapped+peering, 297 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 141 B/s, 2 keys/s, 1 objects/s recovering
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]: {
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:     "0": [
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:         {
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             "devices": [
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "/dev/loop3"
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             ],
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             "lv_name": "ceph_lv0",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             "lv_size": "7511998464",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             "name": "ceph_lv0",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             "tags": {
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "ceph.cluster_name": "ceph",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "ceph.crush_device_class": "",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "ceph.encrypted": "0",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "ceph.osd_id": "0",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "ceph.type": "block",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:                 "ceph.vdo": "0"
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             },
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             "type": "block",
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:             "vg_name": "ceph_vg0"
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:         }
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]:     ]
Jan 27 08:31:48 compute-0 romantic_proskuriakova[99088]: }
Jan 27 08:31:48 compute-0 systemd[1]: libpod-65253cb0d59944eb6509889fcfb8f754a64ca29e4a23ce54e08bc2fbdc0c5257.scope: Deactivated successfully.
Jan 27 08:31:48 compute-0 podman[99072]: 2026-01-27 08:31:48.903800349 +0000 UTC m=+0.958844010 container died 65253cb0d59944eb6509889fcfb8f754a64ca29e4a23ce54e08bc2fbdc0c5257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_proskuriakova, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 27 08:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6de5bbf4b27d61b959c2514fb6178f277c32828b2fffbb00cacfc62dcd730810-merged.mount: Deactivated successfully.
Jan 27 08:31:48 compute-0 podman[99072]: 2026-01-27 08:31:48.956685492 +0000 UTC m=+1.011729123 container remove 65253cb0d59944eb6509889fcfb8f754a64ca29e4a23ce54e08bc2fbdc0c5257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_proskuriakova, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:31:48 compute-0 systemd[1]: libpod-conmon-65253cb0d59944eb6509889fcfb8f754a64ca29e4a23ce54e08bc2fbdc0c5257.scope: Deactivated successfully.
Jan 27 08:31:48 compute-0 sudo[98966]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:49 compute-0 sudo[99109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:49 compute-0 sudo[99109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:49 compute-0 sudo[99109]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:49 compute-0 sudo[99135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:31:49 compute-0 sudo[99135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:49 compute-0 sudo[99135]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:49 compute-0 sudo[99160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:49 compute-0 sudo[99160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:49 compute-0 sudo[99160]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:49 compute-0 sudo[99185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:31:49 compute-0 sudo[99185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:49 compute-0 ceph-mon[74357]: 7.19 scrub starts
Jan 27 08:31:49 compute-0 ceph-mon[74357]: 7.19 scrub ok
Jan 27 08:31:49 compute-0 ceph-mon[74357]: 4.17 scrub starts
Jan 27 08:31:49 compute-0 ceph-mon[74357]: 4.17 scrub ok
Jan 27 08:31:49 compute-0 ceph-mon[74357]: pgmap v163: 305 pgs: 8 remapped+peering, 297 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 141 B/s, 2 keys/s, 1 objects/s recovering
Jan 27 08:31:49 compute-0 ceph-mon[74357]: 2.10 scrub starts
Jan 27 08:31:49 compute-0 ceph-mon[74357]: 2.10 scrub ok
Jan 27 08:31:49 compute-0 podman[99249]: 2026-01-27 08:31:49.591163307 +0000 UTC m=+0.048456008 container create 64facf56802ce84785848dd2bd16dd139b7ce21088be354436d410d5bf1ce42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 27 08:31:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:49.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:49 compute-0 systemd[1]: Started libpod-conmon-64facf56802ce84785848dd2bd16dd139b7ce21088be354436d410d5bf1ce42f.scope.
Jan 27 08:31:49 compute-0 podman[99249]: 2026-01-27 08:31:49.565395523 +0000 UTC m=+0.022688254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:49 compute-0 podman[99249]: 2026-01-27 08:31:49.683150603 +0000 UTC m=+0.140443334 container init 64facf56802ce84785848dd2bd16dd139b7ce21088be354436d410d5bf1ce42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 08:31:49 compute-0 podman[99249]: 2026-01-27 08:31:49.688983995 +0000 UTC m=+0.146276696 container start 64facf56802ce84785848dd2bd16dd139b7ce21088be354436d410d5bf1ce42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_sutherland, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:49 compute-0 cool_sutherland[99265]: 167 167
Jan 27 08:31:49 compute-0 systemd[1]: libpod-64facf56802ce84785848dd2bd16dd139b7ce21088be354436d410d5bf1ce42f.scope: Deactivated successfully.
Jan 27 08:31:49 compute-0 podman[99249]: 2026-01-27 08:31:49.701834592 +0000 UTC m=+0.159127313 container attach 64facf56802ce84785848dd2bd16dd139b7ce21088be354436d410d5bf1ce42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:49 compute-0 podman[99249]: 2026-01-27 08:31:49.702237112 +0000 UTC m=+0.159529813 container died 64facf56802ce84785848dd2bd16dd139b7ce21088be354436d410d5bf1ce42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_sutherland, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdc1a54e570534a2b72960e9c32d06e3dce344c3935e8cf2107c6a3695b386eb-merged.mount: Deactivated successfully.
Jan 27 08:31:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:49.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:49 compute-0 podman[99249]: 2026-01-27 08:31:49.746261494 +0000 UTC m=+0.203554195 container remove 64facf56802ce84785848dd2bd16dd139b7ce21088be354436d410d5bf1ce42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 27 08:31:49 compute-0 systemd[1]: libpod-conmon-64facf56802ce84785848dd2bd16dd139b7ce21088be354436d410d5bf1ce42f.scope: Deactivated successfully.
Jan 27 08:31:49 compute-0 podman[99289]: 2026-01-27 08:31:49.947551499 +0000 UTC m=+0.069119209 container create e2d0e8ee1dc50e34471af3902ae6b724f18538b6cc90ea1675d1d78511a3549a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 27 08:31:50 compute-0 podman[99289]: 2026-01-27 08:31:49.904624306 +0000 UTC m=+0.026192026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:31:50 compute-0 systemd[1]: Started libpod-conmon-e2d0e8ee1dc50e34471af3902ae6b724f18538b6cc90ea1675d1d78511a3549a.scope.
Jan 27 08:31:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a12bb8e408a700f449d57e295280d5e441a5b33088a389065dae0e52b5ee8e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a12bb8e408a700f449d57e295280d5e441a5b33088a389065dae0e52b5ee8e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a12bb8e408a700f449d57e295280d5e441a5b33088a389065dae0e52b5ee8e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a12bb8e408a700f449d57e295280d5e441a5b33088a389065dae0e52b5ee8e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:50 compute-0 podman[99289]: 2026-01-27 08:31:50.070470803 +0000 UTC m=+0.192038513 container init e2d0e8ee1dc50e34471af3902ae6b724f18538b6cc90ea1675d1d78511a3549a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cerf, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:50 compute-0 podman[99289]: 2026-01-27 08:31:50.076960664 +0000 UTC m=+0.198528364 container start e2d0e8ee1dc50e34471af3902ae6b724f18538b6cc90ea1675d1d78511a3549a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Jan 27 08:31:50 compute-0 podman[99289]: 2026-01-27 08:31:50.121338985 +0000 UTC m=+0.242906675 container attach e2d0e8ee1dc50e34471af3902ae6b724f18538b6cc90ea1675d1d78511a3549a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cerf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:31:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 307 B/s, 1 keys/s, 7 objects/s recovering
Jan 27 08:31:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 27 08:31:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 27 08:31:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 27 08:31:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 27 08:31:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 27 08:31:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 27 08:31:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 27 08:31:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 27 08:31:50 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 27 08:31:50 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 27 08:31:50 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 27 08:31:50 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 65 pg[6.5( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=56/56 les/c/f=57/58/0 sis=65) [0] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:50 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 65 pg[6.d( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=56/56 les/c/f=57/58/0 sis=65) [0] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:50 compute-0 interesting_cerf[99307]: {
Jan 27 08:31:50 compute-0 interesting_cerf[99307]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:31:50 compute-0 interesting_cerf[99307]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:31:50 compute-0 interesting_cerf[99307]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:31:50 compute-0 interesting_cerf[99307]:         "osd_id": 0,
Jan 27 08:31:50 compute-0 interesting_cerf[99307]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:31:50 compute-0 interesting_cerf[99307]:         "type": "bluestore"
Jan 27 08:31:50 compute-0 interesting_cerf[99307]:     }
Jan 27 08:31:50 compute-0 interesting_cerf[99307]: }
Jan 27 08:31:50 compute-0 systemd[1]: libpod-e2d0e8ee1dc50e34471af3902ae6b724f18538b6cc90ea1675d1d78511a3549a.scope: Deactivated successfully.
Jan 27 08:31:50 compute-0 podman[99289]: 2026-01-27 08:31:50.896504999 +0000 UTC m=+1.018072729 container died e2d0e8ee1dc50e34471af3902ae6b724f18538b6cc90ea1675d1d78511a3549a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cerf, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 08:31:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a12bb8e408a700f449d57e295280d5e441a5b33088a389065dae0e52b5ee8e2-merged.mount: Deactivated successfully.
Jan 27 08:31:50 compute-0 podman[99289]: 2026-01-27 08:31:50.949973277 +0000 UTC m=+1.071541017 container remove e2d0e8ee1dc50e34471af3902ae6b724f18538b6cc90ea1675d1d78511a3549a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:50 compute-0 systemd[1]: libpod-conmon-e2d0e8ee1dc50e34471af3902ae6b724f18538b6cc90ea1675d1d78511a3549a.scope: Deactivated successfully.
Jan 27 08:31:50 compute-0 sudo[99185]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:31:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:31:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:51 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 7ef2cfa2-38ee-4971-9109-d49415c8fe53 does not exist
Jan 27 08:31:51 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 76cf06cc-f744-4322-befc-612240b86a00 does not exist
Jan 27 08:31:51 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 3b585bd8-3c76-4470-ae12-e86ba9d9b547 does not exist
Jan 27 08:31:51 compute-0 sudo[99340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:51 compute-0 sudo[99340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:51 compute-0 sudo[99340]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:51 compute-0 sudo[99365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:31:51 compute-0 sudo[99365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:51 compute-0 sudo[99365]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 27 08:31:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 27 08:31:51 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 27 08:31:51 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 66 pg[6.5( v 53'3 lc 53'1 (0'0,53'3] local-lis/les=65/66 n=2 ec=50/22 lis/c=56/56 les/c/f=57/58/0 sis=65) [0] r=0 lpr=65 pi=[56,65)/1 crt=53'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:51 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 66 pg[6.d( v 53'3 lc 53'1 (0'0,53'3] local-lis/les=65/66 n=2 ec=50/22 lis/c=56/56 les/c/f=57/58/0 sis=65) [0] r=0 lpr=65 pi=[56,65)/1 crt=53'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 27 08:31:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:51.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 27 08:31:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:51.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:51 compute-0 ceph-mon[74357]: pgmap v164: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 307 B/s, 1 keys/s, 7 objects/s recovering
Jan 27 08:31:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 27 08:31:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 27 08:31:51 compute-0 ceph-mon[74357]: osdmap e65: 3 total, 3 up, 3 in
Jan 27 08:31:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:31:51 compute-0 ceph-mon[74357]: osdmap e66: 3 total, 3 up, 3 in
Jan 27 08:31:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 27 08:31:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 27 08:31:52 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 27 08:31:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 222 B/s, 7 objects/s recovering
Jan 27 08:31:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 27 08:31:52 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 27 08:31:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 27 08:31:52 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 27 08:31:52 compute-0 ceph-mon[74357]: 4.1e scrub starts
Jan 27 08:31:52 compute-0 ceph-mon[74357]: 4.1e scrub ok
Jan 27 08:31:52 compute-0 ceph-mon[74357]: 2.15 scrub starts
Jan 27 08:31:52 compute-0 ceph-mon[74357]: 2.15 scrub ok
Jan 27 08:31:52 compute-0 ceph-mon[74357]: osdmap e67: 3 total, 3 up, 3 in
Jan 27 08:31:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 27 08:31:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 27 08:31:53 compute-0 sudo[99391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:53 compute-0 sudo[99391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:53 compute-0 sudo[99391]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:53 compute-0 sudo[99416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:31:53 compute-0 sudo[99416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:31:53 compute-0 sudo[99416]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 27 08:31:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 27 08:31:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 27 08:31:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 27 08:31:53 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 27 08:31:53 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=68) [0] r=0 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:53 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=68) [0] r=0 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:53 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=68) [0] r=0 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:53 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=68) [0] r=0 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:53 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 68 pg[6.e( v 53'3 (0'0,53'3] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=68 pruub=14.586414337s) [1] r=-1 lpr=68 pi=[58,68)/1 crt=53'3 mlcod 53'3 active pruub 157.491439819s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:53 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 68 pg[6.e( v 53'3 (0'0,53'3] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=68 pruub=14.586368561s) [1] r=-1 lpr=68 pi=[58,68)/1 crt=53'3 mlcod 0'0 unknown NOTIFY pruub 157.491439819s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:53 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 68 pg[6.6( v 55'1 (0'0,55'1] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=68 pruub=14.586061478s) [1] r=-1 lpr=68 pi=[58,68)/1 crt=55'1 mlcod 55'1 active pruub 157.491485596s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:53 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 68 pg[6.6( v 55'1 (0'0,55'1] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=68 pruub=14.586010933s) [1] r=-1 lpr=68 pi=[58,68)/1 crt=55'1 mlcod 0'0 unknown NOTIFY pruub 157.491485596s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:53.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:53.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:53 compute-0 ceph-mon[74357]: pgmap v168: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 222 B/s, 7 objects/s recovering
Jan 27 08:31:53 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 27 08:31:53 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 27 08:31:53 compute-0 ceph-mon[74357]: osdmap e68: 3 total, 3 up, 3 in
Jan 27 08:31:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 27 08:31:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 27 08:31:54 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 27 08:31:54 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 69 pg[9.6( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=69) [0]/[1] r=-1 lpr=69 pi=[52,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:54 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 69 pg[9.6( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=69) [0]/[1] r=-1 lpr=69 pi=[52,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:54 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 69 pg[9.e( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=69) [0]/[1] r=-1 lpr=69 pi=[52,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:54 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 69 pg[9.e( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=69) [0]/[1] r=-1 lpr=69 pi=[52,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:54 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 69 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=69) [0]/[1] r=-1 lpr=69 pi=[52,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:54 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 69 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=69) [0]/[1] r=-1 lpr=69 pi=[52,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:54 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 69 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=69) [0]/[1] r=-1 lpr=69 pi=[52,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:54 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 69 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=69) [0]/[1] r=-1 lpr=69 pi=[52,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:31:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:31:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 27 08:31:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 27 08:31:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 27 08:31:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 27 08:31:54 compute-0 ceph-mon[74357]: 6.4 scrub starts
Jan 27 08:31:54 compute-0 ceph-mon[74357]: 6.4 scrub ok
Jan 27 08:31:54 compute-0 ceph-mon[74357]: osdmap e69: 3 total, 3 up, 3 in
Jan 27 08:31:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 27 08:31:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 27 08:31:54 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 27 08:31:54 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 27 08:31:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 27 08:31:55 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 27 08:31:55 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 27 08:31:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 27 08:31:55 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 27 08:31:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 27 08:31:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:55.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 27 08:31:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:55.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:55 compute-0 ceph-mon[74357]: 6.8 scrub starts
Jan 27 08:31:55 compute-0 ceph-mon[74357]: pgmap v171: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:31:55 compute-0 ceph-mon[74357]: 6.8 scrub ok
Jan 27 08:31:55 compute-0 ceph-mon[74357]: 7.1a scrub starts
Jan 27 08:31:55 compute-0 ceph-mon[74357]: 7.1a scrub ok
Jan 27 08:31:55 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 27 08:31:55 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 27 08:31:55 compute-0 ceph-mon[74357]: osdmap e70: 3 total, 3 up, 3 in
Jan 27 08:31:55 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 27 08:31:55 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 27 08:31:56 compute-0 sudo[99465]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlqzivxwcuagxgxkkpyureohdvthhixf ; /usr/bin/python3'
Jan 27 08:31:56 compute-0 sudo[99465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:31:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 27 08:31:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 27 08:31:56 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 27 08:31:56 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 71 pg[9.1e( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=69/52 les/c/f=70/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:56 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 71 pg[9.6( v 45'998 (0'0,45'998] local-lis/les=0/0 n=6 ec=52/39 lis/c=69/52 les/c/f=70/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:56 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 71 pg[9.1e( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=69/52 les/c/f=70/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:56 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 71 pg[9.6( v 45'998 (0'0,45'998] local-lis/les=0/0 n=6 ec=52/39 lis/c=69/52 les/c/f=70/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:56 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 71 pg[9.e( v 45'998 (0'0,45'998] local-lis/les=0/0 n=6 ec=52/39 lis/c=69/52 les/c/f=70/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:56 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 71 pg[9.e( v 45'998 (0'0,45'998] local-lis/les=0/0 n=6 ec=52/39 lis/c=69/52 les/c/f=70/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:56 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 71 pg[9.16( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=69/52 les/c/f=70/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:31:56 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 71 pg[9.16( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=69/52 les/c/f=70/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:31:56 compute-0 python3[99467]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:31:56 compute-0 podman[99468]: 2026-01-27 08:31:56.585015349 +0000 UTC m=+0.041410898 container create 1d89a124239f0fc54f52a3ee1fc9c36cca5a22106ce34e481028daee8d58cd1d (image=quay.io/ceph/ceph:v18, name=ecstatic_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:31:56 compute-0 systemd[1]: Started libpod-conmon-1d89a124239f0fc54f52a3ee1fc9c36cca5a22106ce34e481028daee8d58cd1d.scope.
Jan 27 08:31:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1dc5a800c4cbba5b9b81d31a3c90400099c658cbbc6ae8253a52d4e6b028edc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1dc5a800c4cbba5b9b81d31a3c90400099c658cbbc6ae8253a52d4e6b028edc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:56 compute-0 podman[99468]: 2026-01-27 08:31:56.661806675 +0000 UTC m=+0.118202314 container init 1d89a124239f0fc54f52a3ee1fc9c36cca5a22106ce34e481028daee8d58cd1d (image=quay.io/ceph/ceph:v18, name=ecstatic_galileo, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 08:31:56 compute-0 podman[99468]: 2026-01-27 08:31:56.567822211 +0000 UTC m=+0.024217790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:31:56 compute-0 podman[99468]: 2026-01-27 08:31:56.670617852 +0000 UTC m=+0.127013421 container start 1d89a124239f0fc54f52a3ee1fc9c36cca5a22106ce34e481028daee8d58cd1d (image=quay.io/ceph/ceph:v18, name=ecstatic_galileo, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:56 compute-0 podman[99468]: 2026-01-27 08:31:56.674187903 +0000 UTC m=+0.130583542 container attach 1d89a124239f0fc54f52a3ee1fc9c36cca5a22106ce34e481028daee8d58cd1d (image=quay.io/ceph/ceph:v18, name=ecstatic_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:31:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 188 B/s, 7 objects/s recovering
Jan 27 08:31:56 compute-0 ceph-mon[74357]: 6.c scrub starts
Jan 27 08:31:56 compute-0 ceph-mon[74357]: 6.c scrub ok
Jan 27 08:31:56 compute-0 ceph-mon[74357]: 7.1c scrub starts
Jan 27 08:31:56 compute-0 ceph-mon[74357]: 7.1c scrub ok
Jan 27 08:31:56 compute-0 ceph-mon[74357]: osdmap e71: 3 total, 3 up, 3 in
Jan 27 08:31:56 compute-0 ceph-mon[74357]: pgmap v174: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 188 B/s, 7 objects/s recovering
Jan 27 08:31:56 compute-0 ecstatic_galileo[99484]: could not fetch user info: no user info saved
Jan 27 08:31:56 compute-0 systemd[1]: libpod-1d89a124239f0fc54f52a3ee1fc9c36cca5a22106ce34e481028daee8d58cd1d.scope: Deactivated successfully.
Jan 27 08:31:56 compute-0 podman[99468]: 2026-01-27 08:31:56.893054971 +0000 UTC m=+0.349450600 container died 1d89a124239f0fc54f52a3ee1fc9c36cca5a22106ce34e481028daee8d58cd1d (image=quay.io/ceph/ceph:v18, name=ecstatic_galileo, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1dc5a800c4cbba5b9b81d31a3c90400099c658cbbc6ae8253a52d4e6b028edc-merged.mount: Deactivated successfully.
Jan 27 08:31:56 compute-0 podman[99468]: 2026-01-27 08:31:56.934414477 +0000 UTC m=+0.390810036 container remove 1d89a124239f0fc54f52a3ee1fc9c36cca5a22106ce34e481028daee8d58cd1d (image=quay.io/ceph/ceph:v18, name=ecstatic_galileo, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:31:56 compute-0 systemd[1]: libpod-conmon-1d89a124239f0fc54f52a3ee1fc9c36cca5a22106ce34e481028daee8d58cd1d.scope: Deactivated successfully.
Jan 27 08:31:56 compute-0 sudo[99465]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:57 compute-0 sudo[99604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptoksajzvueiaptbwjcconlkermrbhkh ; /usr/bin/python3'
Jan 27 08:31:57 compute-0 sudo[99604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:31:57 compute-0 python3[99606]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:31:57 compute-0 podman[99607]: 2026-01-27 08:31:57.369854339 +0000 UTC m=+0.066797504 container create 5b03d975a231576d13323a4ed8794f2004cf315ffcbafc9b6589eac063e97af3 (image=quay.io/ceph/ceph:v18, name=reverent_babbage, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:31:57 compute-0 systemd[1]: Started libpod-conmon-5b03d975a231576d13323a4ed8794f2004cf315ffcbafc9b6589eac063e97af3.scope.
Jan 27 08:31:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 27 08:31:57 compute-0 podman[99607]: 2026-01-27 08:31:57.341531462 +0000 UTC m=+0.038474667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 27 08:31:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 27 08:31:57 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 27 08:31:57 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 72 pg[9.e( v 45'998 (0'0,45'998] local-lis/les=71/72 n=6 ec=52/39 lis/c=69/52 les/c/f=70/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:31:57 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 72 pg[9.16( v 45'998 (0'0,45'998] local-lis/les=71/72 n=5 ec=52/39 lis/c=69/52 les/c/f=70/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:57 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 72 pg[9.1e( v 45'998 (0'0,45'998] local-lis/les=71/72 n=5 ec=52/39 lis/c=69/52 les/c/f=70/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:57 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 72 pg[9.6( v 45'998 (0'0,45'998] local-lis/les=71/72 n=6 ec=52/39 lis/c=69/52 les/c/f=70/53/0 sis=71) [0] r=0 lpr=71 pi=[52,71)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cccac2c402faae74ab3bffa5aaefe926cfda34a2ec96685b48f623f78269b1f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cccac2c402faae74ab3bffa5aaefe926cfda34a2ec96685b48f623f78269b1f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:57 compute-0 podman[99607]: 2026-01-27 08:31:57.469509972 +0000 UTC m=+0.166453187 container init 5b03d975a231576d13323a4ed8794f2004cf315ffcbafc9b6589eac063e97af3 (image=quay.io/ceph/ceph:v18, name=reverent_babbage, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:31:57 compute-0 podman[99607]: 2026-01-27 08:31:57.479198185 +0000 UTC m=+0.176141350 container start 5b03d975a231576d13323a4ed8794f2004cf315ffcbafc9b6589eac063e97af3 (image=quay.io/ceph/ceph:v18, name=reverent_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:57 compute-0 podman[99607]: 2026-01-27 08:31:57.483525011 +0000 UTC m=+0.180468176 container attach 5b03d975a231576d13323a4ed8794f2004cf315ffcbafc9b6589eac063e97af3 (image=quay.io/ceph/ceph:v18, name=reverent_babbage, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:57.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:57 compute-0 reverent_babbage[99622]: {
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "user_id": "openstack",
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "display_name": "openstack",
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "email": "",
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "suspended": 0,
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "max_buckets": 1000,
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "subusers": [],
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "keys": [
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:         {
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:             "user": "openstack",
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:             "access_key": "J07IEEGW6MY47E81FRLQ",
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:             "secret_key": "demSlLReFlGJSL1zFnRC5ZNIjYDGsGxYLkiBzDQt"
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:         }
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     ],
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "swift_keys": [],
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "caps": [],
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "op_mask": "read, write, delete",
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "default_placement": "",
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "default_storage_class": "",
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "placement_tags": [],
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "bucket_quota": {
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:         "enabled": false,
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:         "check_on_raw": false,
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:         "max_size": -1,
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:         "max_size_kb": 0,
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:         "max_objects": -1
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     },
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "user_quota": {
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:         "enabled": false,
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:         "check_on_raw": false,
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:         "max_size": -1,
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:         "max_size_kb": 0,
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:         "max_objects": -1
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     },
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "temp_url_keys": [],
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "type": "rgw",
Jan 27 08:31:57 compute-0 reverent_babbage[99622]:     "mfa_ids": []
Jan 27 08:31:57 compute-0 reverent_babbage[99622]: }
Jan 27 08:31:57 compute-0 reverent_babbage[99622]: 
Jan 27 08:31:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:57.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:57 compute-0 systemd[1]: libpod-5b03d975a231576d13323a4ed8794f2004cf315ffcbafc9b6589eac063e97af3.scope: Deactivated successfully.
Jan 27 08:31:57 compute-0 podman[99607]: 2026-01-27 08:31:57.764100122 +0000 UTC m=+0.461043237 container died 5b03d975a231576d13323a4ed8794f2004cf315ffcbafc9b6589eac063e97af3 (image=quay.io/ceph/ceph:v18, name=reverent_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 08:31:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cccac2c402faae74ab3bffa5aaefe926cfda34a2ec96685b48f623f78269b1f-merged.mount: Deactivated successfully.
Jan 27 08:31:57 compute-0 podman[99607]: 2026-01-27 08:31:57.796444745 +0000 UTC m=+0.493387870 container remove 5b03d975a231576d13323a4ed8794f2004cf315ffcbafc9b6589eac063e97af3 (image=quay.io/ceph/ceph:v18, name=reverent_babbage, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:31:57 compute-0 systemd[1]: libpod-conmon-5b03d975a231576d13323a4ed8794f2004cf315ffcbafc9b6589eac063e97af3.scope: Deactivated successfully.
Jan 27 08:31:57 compute-0 sudo[99604]: pam_unix(sudo:session): session closed for user root
Jan 27 08:31:57 compute-0 ceph-mon[74357]: 2.d scrub starts
Jan 27 08:31:57 compute-0 ceph-mon[74357]: 2.d scrub ok
Jan 27 08:31:57 compute-0 ceph-mon[74357]: osdmap e72: 3 total, 3 up, 3 in
Jan 27 08:31:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 177 B/s, 7 objects/s recovering
Jan 27 08:31:58 compute-0 ceph-mon[74357]: pgmap v176: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 177 B/s, 7 objects/s recovering
Jan 27 08:31:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:31:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:31:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:31:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:31:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:31:59 compute-0 ceph-mon[74357]: 8.1 scrub starts
Jan 27 08:31:59 compute-0 ceph-mon[74357]: 8.1 scrub ok
Jan 27 08:31:59 compute-0 ceph-mon[74357]: 2.a deep-scrub starts
Jan 27 08:31:59 compute-0 ceph-mon[74357]: 2.a deep-scrub ok
Jan 27 08:32:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 1 op/s; 125 B/s, 4 objects/s recovering
Jan 27 08:32:00 compute-0 ceph-mon[74357]: 8.7 scrub starts
Jan 27 08:32:00 compute-0 ceph-mon[74357]: 8.7 scrub ok
Jan 27 08:32:00 compute-0 ceph-mon[74357]: 2.13 scrub starts
Jan 27 08:32:00 compute-0 ceph-mon[74357]: 2.13 scrub ok
Jan 27 08:32:00 compute-0 ceph-mon[74357]: pgmap v177: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 1 op/s; 125 B/s, 4 objects/s recovering
Jan 27 08:32:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 27 08:32:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:01.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 27 08:32:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:01.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 1 op/s; 52 B/s, 3 objects/s recovering
Jan 27 08:32:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 27 08:32:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 27 08:32:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 27 08:32:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 27 08:32:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 27 08:32:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 27 08:32:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 27 08:32:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 27 08:32:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 27 08:32:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 27 08:32:02 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 27 08:32:02 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 73 pg[6.8( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=73) [0] r=0 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:03.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:03.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 27 08:32:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 27 08:32:03 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 27 08:32:03 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 74 pg[6.8( empty local-lis/les=73/74 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=73) [0] r=0 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:03 compute-0 ceph-mon[74357]: pgmap v178: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 1 op/s; 52 B/s, 3 objects/s recovering
Jan 27 08:32:03 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 27 08:32:03 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 27 08:32:03 compute-0 ceph-mon[74357]: osdmap e73: 3 total, 3 up, 3 in
Jan 27 08:32:03 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 27 08:32:03 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 27 08:32:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 1 op/s; 30 B/s, 2 objects/s recovering
Jan 27 08:32:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 27 08:32:04 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 27 08:32:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 27 08:32:04 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 27 08:32:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 27 08:32:04 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 27 08:32:04 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 27 08:32:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 27 08:32:04 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 27 08:32:04 compute-0 ceph-mon[74357]: 8.e scrub starts
Jan 27 08:32:04 compute-0 ceph-mon[74357]: 8.e scrub ok
Jan 27 08:32:04 compute-0 ceph-mon[74357]: osdmap e74: 3 total, 3 up, 3 in
Jan 27 08:32:04 compute-0 ceph-mon[74357]: 10.6 scrub starts
Jan 27 08:32:04 compute-0 ceph-mon[74357]: 10.6 scrub ok
Jan 27 08:32:04 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 27 08:32:04 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 27 08:32:04 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 27 08:32:04 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 27 08:32:04 compute-0 ceph-mon[74357]: osdmap e75: 3 total, 3 up, 3 in
Jan 27 08:32:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:05.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 27 08:32:05 compute-0 ceph-mon[74357]: pgmap v181: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 1 op/s; 30 B/s, 2 objects/s recovering
Jan 27 08:32:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 27 08:32:05 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 27 08:32:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 2 unknown, 2 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 27 08:32:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 27 08:32:06 compute-0 ceph-mon[74357]: 8.13 deep-scrub starts
Jan 27 08:32:06 compute-0 ceph-mon[74357]: 8.13 deep-scrub ok
Jan 27 08:32:06 compute-0 ceph-mon[74357]: osdmap e76: 3 total, 3 up, 3 in
Jan 27 08:32:06 compute-0 ceph-mon[74357]: pgmap v184: 305 pgs: 2 unknown, 2 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:06 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 27 08:32:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 27 08:32:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:07.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 27 08:32:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 27 08:32:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:07.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 27 08:32:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 27 08:32:07 compute-0 ceph-mon[74357]: osdmap e77: 3 total, 3 up, 3 in
Jan 27 08:32:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 27 08:32:07 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 27 08:32:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 2 unknown, 2 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 27 08:32:08 compute-0 ceph-mon[74357]: 8.1a scrub starts
Jan 27 08:32:08 compute-0 ceph-mon[74357]: 8.1a scrub ok
Jan 27 08:32:08 compute-0 ceph-mon[74357]: osdmap e78: 3 total, 3 up, 3 in
Jan 27 08:32:08 compute-0 ceph-mon[74357]: pgmap v187: 305 pgs: 2 unknown, 2 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 27 08:32:08 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 27 08:32:08 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 27 08:32:09 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 27 08:32:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:09.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:09.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:09 compute-0 ceph-mon[74357]: osdmap e79: 3 total, 3 up, 3 in
Jan 27 08:32:09 compute-0 ceph-mon[74357]: 10.7 scrub starts
Jan 27 08:32:09 compute-0 ceph-mon[74357]: 10.7 scrub ok
Jan 27 08:32:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 421 B/s wr, 29 op/s; 113 B/s, 4 objects/s recovering
Jan 27 08:32:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 27 08:32:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 27 08:32:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 27 08:32:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 27 08:32:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 27 08:32:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 27 08:32:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 27 08:32:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 27 08:32:10 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 27 08:32:10 compute-0 ceph-mon[74357]: 5.4 scrub starts
Jan 27 08:32:10 compute-0 ceph-mon[74357]: 5.4 scrub ok
Jan 27 08:32:10 compute-0 ceph-mon[74357]: pgmap v189: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 421 B/s wr, 29 op/s; 113 B/s, 4 objects/s recovering
Jan 27 08:32:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 27 08:32:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 27 08:32:10 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 80 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=80) [0] r=0 lpr=80 pi=[52,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:10 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 80 pg[9.a( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=80) [0] r=0 lpr=80 pi=[52,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 27 08:32:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 27 08:32:11 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 27 08:32:11 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 81 pg[9.a( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[52,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:11 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 81 pg[9.a( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[52,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:11 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 81 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[52,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:11 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 81 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[52,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:11.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:11 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 27 08:32:11 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 27 08:32:11 compute-0 ceph-mon[74357]: osdmap e80: 3 total, 3 up, 3 in
Jan 27 08:32:11 compute-0 ceph-mon[74357]: osdmap e81: 3 total, 3 up, 3 in
Jan 27 08:32:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 27 08:32:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 27 08:32:12 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 27 08:32:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 511 B/s wr, 35 op/s; 137 B/s, 5 objects/s recovering
Jan 27 08:32:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 27 08:32:12 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 27 08:32:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 27 08:32:12 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 27 08:32:13 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 27 08:32:13 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 27 08:32:13 compute-0 sudo[99727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:13 compute-0 sudo[99727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:13 compute-0 sudo[99727]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:13 compute-0 sudo[99752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:13 compute-0 sudo[99752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:13 compute-0 sudo[99752]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 27 08:32:13 compute-0 ceph-mon[74357]: osdmap e82: 3 total, 3 up, 3 in
Jan 27 08:32:13 compute-0 ceph-mon[74357]: pgmap v193: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 511 B/s wr, 35 op/s; 137 B/s, 5 objects/s recovering
Jan 27 08:32:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 27 08:32:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 27 08:32:13 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 27 08:32:13 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 27 08:32:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 27 08:32:13 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 27 08:32:13 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 83 pg[9.a( v 45'998 (0'0,45'998] local-lis/les=0/0 n=6 ec=52/39 lis/c=81/52 les/c/f=82/53/0 sis=83) [0] r=0 lpr=83 pi=[52,83)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:13 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 83 pg[9.a( v 45'998 (0'0,45'998] local-lis/les=0/0 n=6 ec=52/39 lis/c=81/52 les/c/f=82/53/0 sis=83) [0] r=0 lpr=83 pi=[52,83)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:13 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 83 pg[9.1a( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=81/52 les/c/f=82/53/0 sis=83) [0] r=0 lpr=83 pi=[52,83)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:13 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 83 pg[9.1a( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=81/52 les/c/f=82/53/0 sis=83) [0] r=0 lpr=83 pi=[52,83)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:13 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 83 pg[6.b( v 53'3 (0'0,53'3] local-lis/les=60/61 n=1 ec=50/22 lis/c=60/60 les/c/f=61/61/0 sis=83 pruub=10.630340576s) [1] r=-1 lpr=83 pi=[60,83)/1 crt=53'3 mlcod 53'3 active pruub 173.646514893s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:13 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 83 pg[6.b( v 53'3 (0'0,53'3] local-lis/les=60/61 n=1 ec=50/22 lis/c=60/60 les/c/f=61/61/0 sis=83 pruub=10.630254745s) [1] r=-1 lpr=83 pi=[60,83)/1 crt=53'3 mlcod 0'0 unknown NOTIFY pruub 173.646514893s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 27 08:32:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:13.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 27 08:32:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:14 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 27 08:32:14 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 27 08:32:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 27 08:32:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 27 08:32:14 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 27 08:32:14 compute-0 ceph-mon[74357]: 10.9 scrub starts
Jan 27 08:32:14 compute-0 ceph-mon[74357]: 10.9 scrub ok
Jan 27 08:32:14 compute-0 ceph-mon[74357]: 8.1d scrub starts
Jan 27 08:32:14 compute-0 ceph-mon[74357]: 8.1d scrub ok
Jan 27 08:32:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 27 08:32:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 27 08:32:14 compute-0 ceph-mon[74357]: osdmap e83: 3 total, 3 up, 3 in
Jan 27 08:32:14 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 84 pg[9.a( v 45'998 (0'0,45'998] local-lis/les=83/84 n=6 ec=52/39 lis/c=81/52 les/c/f=82/53/0 sis=83) [0] r=0 lpr=83 pi=[52,83)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:14 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 84 pg[9.1a( v 45'998 (0'0,45'998] local-lis/les=83/84 n=5 ec=52/39 lis/c=81/52 les/c/f=82/53/0 sis=83) [0] r=0 lpr=83 pi=[52,83)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 27 08:32:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 27 08:32:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 27 08:32:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 27 08:32:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:32:14
Jan 27 08:32:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:32:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:32:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['vms', 'volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.control']
Jan 27 08:32:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:32:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:32:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 27 08:32:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 27 08:32:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 27 08:32:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 27 08:32:15 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 27 08:32:15 compute-0 ceph-mon[74357]: 10.a scrub starts
Jan 27 08:32:15 compute-0 ceph-mon[74357]: 10.a scrub ok
Jan 27 08:32:15 compute-0 ceph-mon[74357]: osdmap e84: 3 total, 3 up, 3 in
Jan 27 08:32:15 compute-0 ceph-mon[74357]: pgmap v196: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 27 08:32:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 27 08:32:15 compute-0 ceph-mon[74357]: 5.8 scrub starts
Jan 27 08:32:15 compute-0 ceph-mon[74357]: 5.8 scrub ok
Jan 27 08:32:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:15.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:15.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:16 compute-0 ceph-mon[74357]: 8.1e deep-scrub starts
Jan 27 08:32:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 27 08:32:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 27 08:32:16 compute-0 ceph-mon[74357]: osdmap e85: 3 total, 3 up, 3 in
Jan 27 08:32:16 compute-0 ceph-mon[74357]: 8.1e deep-scrub ok
Jan 27 08:32:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 51 B/s, 3 objects/s recovering
Jan 27 08:32:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 27 08:32:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 27 08:32:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 27 08:32:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 27 08:32:16 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 27 08:32:16 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 27 08:32:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 27 08:32:17 compute-0 ceph-mon[74357]: 9.1 scrub starts
Jan 27 08:32:17 compute-0 ceph-mon[74357]: 9.1 scrub ok
Jan 27 08:32:17 compute-0 ceph-mon[74357]: pgmap v198: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 51 B/s, 3 objects/s recovering
Jan 27 08:32:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 27 08:32:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 27 08:32:17 compute-0 ceph-mon[74357]: 5.b scrub starts
Jan 27 08:32:17 compute-0 ceph-mon[74357]: 5.b scrub ok
Jan 27 08:32:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 27 08:32:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 27 08:32:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 27 08:32:17 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 27 08:32:17 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 86 pg[9.d( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=86 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:17 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 86 pg[9.1d( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=86 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:17.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:17.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:18 compute-0 ceph-mon[74357]: 10.b scrub starts
Jan 27 08:32:18 compute-0 ceph-mon[74357]: 10.b scrub ok
Jan 27 08:32:18 compute-0 ceph-mon[74357]: 9.2 scrub starts
Jan 27 08:32:18 compute-0 ceph-mon[74357]: 9.2 scrub ok
Jan 27 08:32:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 27 08:32:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 27 08:32:18 compute-0 ceph-mon[74357]: osdmap e86: 3 total, 3 up, 3 in
Jan 27 08:32:18 compute-0 ceph-mon[74357]: 5.d scrub starts
Jan 27 08:32:18 compute-0 ceph-mon[74357]: 5.d scrub ok
Jan 27 08:32:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 27 08:32:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 27 08:32:18 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 27 08:32:18 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=68/68 les/c/f=69/69/0 sis=87) [0]/[2] r=-1 lpr=87 pi=[68,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:18 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=68/68 les/c/f=69/69/0 sis=87) [0]/[2] r=-1 lpr=87 pi=[68,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:18 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=68/68 les/c/f=69/69/0 sis=87) [0]/[2] r=-1 lpr=87 pi=[68,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:18 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=68/68 les/c/f=69/69/0 sis=87) [0]/[2] r=-1 lpr=87 pi=[68,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 53 B/s, 3 objects/s recovering
Jan 27 08:32:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 27 08:32:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 27 08:32:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 27 08:32:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 27 08:32:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 27 08:32:19 compute-0 ceph-mon[74357]: osdmap e87: 3 total, 3 up, 3 in
Jan 27 08:32:19 compute-0 ceph-mon[74357]: pgmap v201: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 53 B/s, 3 objects/s recovering
Jan 27 08:32:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 27 08:32:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 27 08:32:19 compute-0 ceph-mon[74357]: 5.e deep-scrub starts
Jan 27 08:32:19 compute-0 ceph-mon[74357]: 5.e deep-scrub ok
Jan 27 08:32:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 27 08:32:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:19.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 27 08:32:19 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 27 08:32:19 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 27 08:32:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 27 08:32:19 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 27 08:32:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 88 pg[6.e( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=68/68 les/c/f=69/69/0 sis=88) [0] r=0 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:19.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:19 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 27 08:32:19 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 27 08:32:20 compute-0 sshd-session[99780]: Accepted publickey for zuul from 192.168.122.30 port 37934 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:32:20 compute-0 systemd-logind[799]: New session 34 of user zuul.
Jan 27 08:32:20 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 27 08:32:20 compute-0 sshd-session[99780]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:32:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 27 08:32:20 compute-0 ceph-mon[74357]: 9.4 scrub starts
Jan 27 08:32:20 compute-0 ceph-mon[74357]: 9.4 scrub ok
Jan 27 08:32:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 27 08:32:20 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 27 08:32:20 compute-0 ceph-mon[74357]: osdmap e88: 3 total, 3 up, 3 in
Jan 27 08:32:20 compute-0 ceph-mon[74357]: 10.c scrub starts
Jan 27 08:32:20 compute-0 ceph-mon[74357]: 10.c scrub ok
Jan 27 08:32:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 27 08:32:20 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 27 08:32:20 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 89 pg[9.d( v 45'998 (0'0,45'998] local-lis/les=0/0 n=6 ec=52/39 lis/c=87/68 les/c/f=88/69/0 sis=89) [0] r=0 lpr=89 pi=[68,89)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:20 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 89 pg[9.d( v 45'998 (0'0,45'998] local-lis/les=0/0 n=6 ec=52/39 lis/c=87/68 les/c/f=88/69/0 sis=89) [0] r=0 lpr=89 pi=[68,89)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:20 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 89 pg[9.1d( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=87/68 les/c/f=88/69/0 sis=89) [0] r=0 lpr=89 pi=[68,89)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:20 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 89 pg[9.1d( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=87/68 les/c/f=88/69/0 sis=89) [0] r=0 lpr=89 pi=[68,89)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:20 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 89 pg[6.e( v 53'3 lc 53'1 (0'0,53'3] local-lis/les=88/89 n=1 ec=50/22 lis/c=68/68 les/c/f=69/69/0 sis=88) [0] r=0 lpr=88 pi=[68,88)/1 crt=53'3 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 2 active+remapped, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 27 08:32:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 27 08:32:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 27 08:32:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 27 08:32:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 27 08:32:21 compute-0 python3.9[99933]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:32:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:21.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 27 08:32:21 compute-0 ceph-mon[74357]: 9.c scrub starts
Jan 27 08:32:21 compute-0 ceph-mon[74357]: 9.c scrub ok
Jan 27 08:32:21 compute-0 ceph-mon[74357]: osdmap e89: 3 total, 3 up, 3 in
Jan 27 08:32:21 compute-0 ceph-mon[74357]: pgmap v204: 305 pgs: 2 active+remapped, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 27 08:32:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 27 08:32:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 27 08:32:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 27 08:32:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 27 08:32:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 27 08:32:21 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 27 08:32:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 90 pg[6.f( v 53'5 (0'0,53'5] local-lis/les=60/61 n=3 ec=50/22 lis/c=60/60 les/c/f=61/61/0 sis=90 pruub=10.466776848s) [1] r=-1 lpr=90 pi=[60,90)/1 crt=53'5 mlcod 53'5 active pruub 181.649353027s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 90 pg[6.f( v 53'5 (0'0,53'5] local-lis/les=60/61 n=3 ec=50/22 lis/c=60/60 les/c/f=61/61/0 sis=90 pruub=10.466709137s) [1] r=-1 lpr=90 pi=[60,90)/1 crt=53'5 mlcod 0'0 unknown NOTIFY pruub 181.649353027s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 90 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=63/63 les/c/f=64/64/0 sis=90) [0] r=0 lpr=90 pi=[63,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 90 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=63/63 les/c/f=64/64/0 sis=90) [0] r=0 lpr=90 pi=[63,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 90 pg[9.d( v 45'998 (0'0,45'998] local-lis/les=89/90 n=6 ec=52/39 lis/c=87/68 les/c/f=88/69/0 sis=89) [0] r=0 lpr=89 pi=[68,89)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 90 pg[9.1d( v 45'998 (0'0,45'998] local-lis/les=89/90 n=5 ec=52/39 lis/c=87/68 les/c/f=88/69/0 sis=89) [0] r=0 lpr=89 pi=[68,89)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:21.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:22 compute-0 sudo[100146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aokyuwnurqpuwnougjlhlulzvexkizdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502742.0478542-56-163954451801260/AnsiballZ_command.py'
Jan 27 08:32:22 compute-0 sudo[100146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:32:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 81 B/s, 3 objects/s recovering
Jan 27 08:32:22 compute-0 python3.9[100148]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:32:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 27 08:32:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 27 08:32:22 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 27 08:32:22 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 91 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=63/63 les/c/f=64/64/0 sis=91) [0]/[2] r=-1 lpr=91 pi=[63,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:22 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 91 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=63/63 les/c/f=64/64/0 sis=91) [0]/[2] r=-1 lpr=91 pi=[63,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:22 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 91 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=63/63 les/c/f=64/64/0 sis=91) [0]/[2] r=-1 lpr=91 pi=[63,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:22 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 91 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=63/63 les/c/f=64/64/0 sis=91) [0]/[2] r=-1 lpr=91 pi=[63,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:22 compute-0 ceph-mon[74357]: 9.10 scrub starts
Jan 27 08:32:22 compute-0 ceph-mon[74357]: 9.10 scrub ok
Jan 27 08:32:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 27 08:32:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 27 08:32:22 compute-0 ceph-mon[74357]: osdmap e90: 3 total, 3 up, 3 in
Jan 27 08:32:22 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 27 08:32:23 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 27 08:32:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:23.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 27 08:32:23 compute-0 ceph-mon[74357]: 9.11 scrub starts
Jan 27 08:32:23 compute-0 ceph-mon[74357]: 9.11 scrub ok
Jan 27 08:32:23 compute-0 ceph-mon[74357]: pgmap v206: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 81 B/s, 3 objects/s recovering
Jan 27 08:32:23 compute-0 ceph-mon[74357]: osdmap e91: 3 total, 3 up, 3 in
Jan 27 08:32:23 compute-0 ceph-mon[74357]: 10.d scrub starts
Jan 27 08:32:23 compute-0 ceph-mon[74357]: 10.d scrub ok
Jan 27 08:32:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 27 08:32:23 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 27 08:32:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:23.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.724886004094547e-06 of space, bias 1.0, pg target 0.002017465801228364 quantized to 32 (current 32)
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:32:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 27 08:32:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 27 08:32:24 compute-0 ceph-mon[74357]: osdmap e92: 3 total, 3 up, 3 in
Jan 27 08:32:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 27 08:32:24 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 27 08:32:24 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 93 pg[9.f( v 45'998 (0'0,45'998] local-lis/les=0/0 n=6 ec=52/39 lis/c=91/63 les/c/f=92/64/0 sis=93) [0] r=0 lpr=93 pi=[63,93)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:24 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 93 pg[9.f( v 45'998 (0'0,45'998] local-lis/les=0/0 n=6 ec=52/39 lis/c=91/63 les/c/f=92/64/0 sis=93) [0] r=0 lpr=93 pi=[63,93)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:24 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 93 pg[9.1f( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=91/63 les/c/f=92/64/0 sis=93) [0] r=0 lpr=93 pi=[63,93)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:24 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 93 pg[9.1f( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=91/63 les/c/f=92/64/0 sis=93) [0] r=0 lpr=93 pi=[63,93)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:25.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 27 08:32:25 compute-0 ceph-mon[74357]: 9.12 scrub starts
Jan 27 08:32:25 compute-0 ceph-mon[74357]: 9.12 scrub ok
Jan 27 08:32:25 compute-0 ceph-mon[74357]: pgmap v209: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 27 08:32:25 compute-0 ceph-mon[74357]: osdmap e93: 3 total, 3 up, 3 in
Jan 27 08:32:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 27 08:32:25 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 27 08:32:25 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 94 pg[9.f( v 45'998 (0'0,45'998] local-lis/les=93/94 n=6 ec=52/39 lis/c=91/63 les/c/f=92/64/0 sis=93) [0] r=0 lpr=93 pi=[63,93)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:25 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 94 pg[9.1f( v 45'998 (0'0,45'998] local-lis/les=93/94 n=5 ec=52/39 lis/c=91/63 les/c/f=92/64/0 sis=93) [0] r=0 lpr=93 pi=[63,93)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 205 B/s, 3 objects/s recovering
Jan 27 08:32:26 compute-0 ceph-mon[74357]: osdmap e94: 3 total, 3 up, 3 in
Jan 27 08:32:26 compute-0 ceph-mon[74357]: pgmap v212: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 205 B/s, 3 objects/s recovering
Jan 27 08:32:26 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 27 08:32:26 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 27 08:32:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:27.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:27.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:27 compute-0 ceph-mon[74357]: 9.14 deep-scrub starts
Jan 27 08:32:27 compute-0 ceph-mon[74357]: 9.14 deep-scrub ok
Jan 27 08:32:27 compute-0 ceph-mon[74357]: 10.e scrub starts
Jan 27 08:32:27 compute-0 ceph-mon[74357]: 10.e scrub ok
Jan 27 08:32:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 137 B/s, 2 objects/s recovering
Jan 27 08:32:28 compute-0 ceph-mon[74357]: pgmap v213: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 137 B/s, 2 objects/s recovering
Jan 27 08:32:29 compute-0 sudo[100146]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:29.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:29.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:29 compute-0 sshd-session[99783]: Connection closed by 192.168.122.30 port 37934
Jan 27 08:32:29 compute-0 sshd-session[99780]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:32:29 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 27 08:32:29 compute-0 systemd[1]: session-34.scope: Consumed 8.083s CPU time.
Jan 27 08:32:29 compute-0 systemd-logind[799]: Session 34 logged out. Waiting for processes to exit.
Jan 27 08:32:29 compute-0 systemd-logind[799]: Removed session 34.
Jan 27 08:32:29 compute-0 ceph-mon[74357]: 5.12 scrub starts
Jan 27 08:32:29 compute-0 ceph-mon[74357]: 5.12 scrub ok
Jan 27 08:32:29 compute-0 ceph-mon[74357]: 5.13 scrub starts
Jan 27 08:32:29 compute-0 ceph-mon[74357]: 5.13 scrub ok
Jan 27 08:32:30 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.16 deep-scrub starts
Jan 27 08:32:30 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.16 deep-scrub ok
Jan 27 08:32:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 118 B/s, 2 objects/s recovering
Jan 27 08:32:31 compute-0 ceph-mon[74357]: 10.16 deep-scrub starts
Jan 27 08:32:31 compute-0 ceph-mon[74357]: 10.16 deep-scrub ok
Jan 27 08:32:31 compute-0 ceph-mon[74357]: pgmap v214: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 118 B/s, 2 objects/s recovering
Jan 27 08:32:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:31.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:31.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:32 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.17 deep-scrub starts
Jan 27 08:32:32 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.17 deep-scrub ok
Jan 27 08:32:32 compute-0 ceph-mon[74357]: 10.17 deep-scrub starts
Jan 27 08:32:32 compute-0 ceph-mon[74357]: 10.17 deep-scrub ok
Jan 27 08:32:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 102 B/s, 1 objects/s recovering
Jan 27 08:32:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Jan 27 08:32:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 27 08:32:32 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 27 08:32:32 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 27 08:32:33 compute-0 sudo[100212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:33 compute-0 sudo[100212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:33 compute-0 sudo[100212]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:33 compute-0 sudo[100237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:33 compute-0 sudo[100237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:33 compute-0 sudo[100237]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:33.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 27 08:32:33 compute-0 ceph-mon[74357]: 9.1c scrub starts
Jan 27 08:32:33 compute-0 ceph-mon[74357]: 9.1c scrub ok
Jan 27 08:32:33 compute-0 ceph-mon[74357]: pgmap v215: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 102 B/s, 1 objects/s recovering
Jan 27 08:32:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 27 08:32:33 compute-0 ceph-mon[74357]: 10.1a scrub starts
Jan 27 08:32:33 compute-0 ceph-mon[74357]: 10.1a scrub ok
Jan 27 08:32:33 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 27 08:32:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 27 08:32:33 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 27 08:32:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:33.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Jan 27 08:32:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 27 08:32:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 27 08:32:34 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=95) [0] r=0 lpr=95 pi=[52,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 27 08:32:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 27 08:32:34 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 27 08:32:34 compute-0 ceph-mon[74357]: 11.2 scrub starts
Jan 27 08:32:34 compute-0 ceph-mon[74357]: 11.2 scrub ok
Jan 27 08:32:34 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 27 08:32:34 compute-0 ceph-mon[74357]: osdmap e95: 3 total, 3 up, 3 in
Jan 27 08:32:34 compute-0 ceph-mon[74357]: 5.1a deep-scrub starts
Jan 27 08:32:34 compute-0 ceph-mon[74357]: 5.1a deep-scrub ok
Jan 27 08:32:34 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 27 08:32:34 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 96 pg[9.11( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=96) [0] r=0 lpr=96 pi=[52,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:35.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 27 08:32:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 27 08:32:35 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 27 08:32:35 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 97 pg[9.10( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=97) [0]/[1] r=-1 lpr=97 pi=[52,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:35 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 97 pg[9.10( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=97) [0]/[1] r=-1 lpr=97 pi=[52,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:35 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 97 pg[9.11( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=97) [0]/[1] r=-1 lpr=97 pi=[52,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:35 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 97 pg[9.11( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=97) [0]/[1] r=-1 lpr=97 pi=[52,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:35 compute-0 ceph-mon[74357]: pgmap v217: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 27 08:32:35 compute-0 ceph-mon[74357]: osdmap e96: 3 total, 3 up, 3 in
Jan 27 08:32:35 compute-0 ceph-mon[74357]: osdmap e97: 3 total, 3 up, 3 in
Jan 27 08:32:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:35.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:35 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 27 08:32:36 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 27 08:32:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Jan 27 08:32:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 27 08:32:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 27 08:32:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 27 08:32:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 27 08:32:36 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 27 08:32:36 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 98 pg[9.12( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=98) [0] r=0 lpr=98 pi=[52,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:36 compute-0 ceph-mon[74357]: 10.12 scrub starts
Jan 27 08:32:36 compute-0 ceph-mon[74357]: 10.12 scrub ok
Jan 27 08:32:36 compute-0 ceph-mon[74357]: 10.1c scrub starts
Jan 27 08:32:36 compute-0 ceph-mon[74357]: 10.1c scrub ok
Jan 27 08:32:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 27 08:32:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 27 08:32:36 compute-0 ceph-mon[74357]: osdmap e98: 3 total, 3 up, 3 in
Jan 27 08:32:36 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 27 08:32:36 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 27 08:32:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 27 08:32:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:37.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:37.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:38 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 27 08:32:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 27 08:32:38 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 27 08:32:38 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 99 pg[9.10( v 45'998 (0'0,45'998] local-lis/les=0/0 n=2 ec=52/39 lis/c=97/52 les/c/f=98/53/0 sis=99) [0] r=0 lpr=99 pi=[52,99)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:38 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 99 pg[9.11( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=97/52 les/c/f=98/53/0 sis=99) [0] r=0 lpr=99 pi=[52,99)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:38 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 99 pg[9.10( v 45'998 (0'0,45'998] local-lis/les=0/0 n=2 ec=52/39 lis/c=97/52 les/c/f=98/53/0 sis=99) [0] r=0 lpr=99 pi=[52,99)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:38 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 99 pg[9.11( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=97/52 les/c/f=98/53/0 sis=99) [0] r=0 lpr=99 pi=[52,99)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:38 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 99 pg[9.12( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=99) [0]/[1] r=-1 lpr=99 pi=[52,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:38 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 99 pg[9.12( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=99) [0]/[1] r=-1 lpr=99 pi=[52,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:38 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 27 08:32:38 compute-0 ceph-mon[74357]: pgmap v220: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:38 compute-0 ceph-mon[74357]: 10.1d scrub starts
Jan 27 08:32:38 compute-0 ceph-mon[74357]: 10.1d scrub ok
Jan 27 08:32:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Jan 27 08:32:38 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 27 08:32:39 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 27 08:32:39 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 27 08:32:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 27 08:32:39 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 27 08:32:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 27 08:32:39 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 27 08:32:39 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 100 pg[9.10( v 45'998 (0'0,45'998] local-lis/les=99/100 n=2 ec=52/39 lis/c=97/52 les/c/f=98/53/0 sis=99) [0] r=0 lpr=99 pi=[52,99)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:39 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 100 pg[9.11( v 45'998 (0'0,45'998] local-lis/les=99/100 n=5 ec=52/39 lis/c=97/52 les/c/f=98/53/0 sis=99) [0] r=0 lpr=99 pi=[52,99)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:39 compute-0 ceph-mon[74357]: 10.1f scrub starts
Jan 27 08:32:39 compute-0 ceph-mon[74357]: osdmap e99: 3 total, 3 up, 3 in
Jan 27 08:32:39 compute-0 ceph-mon[74357]: 10.1f scrub ok
Jan 27 08:32:39 compute-0 ceph-mon[74357]: pgmap v223: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:39 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 27 08:32:39 compute-0 ceph-mon[74357]: 5.18 scrub starts
Jan 27 08:32:39 compute-0 ceph-mon[74357]: 5.18 scrub ok
Jan 27 08:32:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 27 08:32:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:39.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 27 08:32:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 27 08:32:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:39.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 27 08:32:40 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 27 08:32:40 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 27 08:32:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 27 08:32:40 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 27 08:32:40 compute-0 ceph-mon[74357]: osdmap e100: 3 total, 3 up, 3 in
Jan 27 08:32:40 compute-0 ceph-mon[74357]: 5.11 scrub starts
Jan 27 08:32:40 compute-0 ceph-mon[74357]: 5.11 scrub ok
Jan 27 08:32:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 27 08:32:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 101 pg[9.12( v 45'998 (0'0,45'998] local-lis/les=0/0 n=4 ec=52/39 lis/c=99/52 les/c/f=100/53/0 sis=101) [0] r=0 lpr=101 pi=[52,101)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:40 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 27 08:32:40 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 101 pg[9.12( v 45'998 (0'0,45'998] local-lis/les=0/0 n=4 ec=52/39 lis/c=99/52 les/c/f=100/53/0 sis=101) [0] r=0 lpr=101 pi=[52,101)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 active+remapped, 304 active+clean; 457 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 27 08:32:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Jan 27 08:32:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 27 08:32:41 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 27 08:32:41 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 27 08:32:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 27 08:32:41 compute-0 ceph-mon[74357]: osdmap e101: 3 total, 3 up, 3 in
Jan 27 08:32:41 compute-0 ceph-mon[74357]: pgmap v226: 305 pgs: 1 active+remapped, 304 active+clean; 457 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 27 08:32:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 27 08:32:41 compute-0 ceph-mon[74357]: 7.1f scrub starts
Jan 27 08:32:41 compute-0 ceph-mon[74357]: 7.1f scrub ok
Jan 27 08:32:41 compute-0 ceph-mon[74357]: 5.10 scrub starts
Jan 27 08:32:41 compute-0 ceph-mon[74357]: 5.10 scrub ok
Jan 27 08:32:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 27 08:32:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 27 08:32:41 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 27 08:32:41 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 102 pg[9.12( v 45'998 (0'0,45'998] local-lis/les=101/102 n=4 ec=52/39 lis/c=99/52 les/c/f=100/53/0 sis=101) [0] r=0 lpr=101 pi=[52,101)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:41.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:41.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 27 08:32:42 compute-0 ceph-mon[74357]: osdmap e102: 3 total, 3 up, 3 in
Jan 27 08:32:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 2 objects/s recovering
Jan 27 08:32:43 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Jan 27 08:32:43 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Jan 27 08:32:43 compute-0 ceph-mon[74357]: pgmap v228: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 2 objects/s recovering
Jan 27 08:32:43 compute-0 ceph-mon[74357]: 5.16 deep-scrub starts
Jan 27 08:32:43 compute-0 ceph-mon[74357]: 5.16 deep-scrub ok
Jan 27 08:32:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:32:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:43.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:32:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:43.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:44 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 27 08:32:44 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 27 08:32:44 compute-0 ceph-mon[74357]: 11.6 deep-scrub starts
Jan 27 08:32:44 compute-0 ceph-mon[74357]: 11.6 deep-scrub ok
Jan 27 08:32:44 compute-0 ceph-mon[74357]: 5.7 scrub starts
Jan 27 08:32:44 compute-0 ceph-mon[74357]: 5.7 scrub ok
Jan 27 08:32:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 27 08:32:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:32:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:32:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:32:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:32:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:32:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:32:45 compute-0 sshd-session[100269]: Accepted publickey for zuul from 192.168.122.30 port 53510 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:32:45 compute-0 systemd-logind[799]: New session 35 of user zuul.
Jan 27 08:32:45 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 27 08:32:45 compute-0 sshd-session[100269]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:32:45 compute-0 ceph-mon[74357]: 7.11 scrub starts
Jan 27 08:32:45 compute-0 ceph-mon[74357]: 7.11 scrub ok
Jan 27 08:32:45 compute-0 ceph-mon[74357]: pgmap v229: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 27 08:32:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:45.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:45.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:46 compute-0 python3.9[100422]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 27 08:32:46 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.15 deep-scrub starts
Jan 27 08:32:46 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.15 deep-scrub ok
Jan 27 08:32:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:46 compute-0 ceph-mon[74357]: 5.15 deep-scrub starts
Jan 27 08:32:46 compute-0 ceph-mon[74357]: 5.15 deep-scrub ok
Jan 27 08:32:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 29 B/s, 1 objects/s recovering
Jan 27 08:32:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Jan 27 08:32:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 27 08:32:47 compute-0 python3.9[100597]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:32:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 27 08:32:47 compute-0 ceph-mon[74357]: pgmap v230: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 29 B/s, 1 objects/s recovering
Jan 27 08:32:47 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 27 08:32:47 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 27 08:32:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 27 08:32:47 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 27 08:32:47 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 103 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=68/68 les/c/f=69/69/0 sis=103) [0] r=0 lpr=103 pi=[68,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:47.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:47.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:48 compute-0 sudo[100751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izgczyxrbqwepyxfiwgmglgsdyukbuxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502767.8713028-93-59063265093061/AnsiballZ_command.py'
Jan 27 08:32:48 compute-0 sudo[100751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:32:48 compute-0 python3.9[100753]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:32:48 compute-0 sudo[100751]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 27 08:32:48 compute-0 ceph-mon[74357]: 11.9 scrub starts
Jan 27 08:32:48 compute-0 ceph-mon[74357]: 11.9 scrub ok
Jan 27 08:32:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 27 08:32:48 compute-0 ceph-mon[74357]: osdmap e103: 3 total, 3 up, 3 in
Jan 27 08:32:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 27 08:32:48 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 27 08:32:48 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 104 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=68/68 les/c/f=69/69/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[68,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:48 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 104 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=68/68 les/c/f=69/69/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[68,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Jan 27 08:32:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 27 08:32:49 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.2 deep-scrub starts
Jan 27 08:32:49 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.2 deep-scrub ok
Jan 27 08:32:49 compute-0 sudo[100905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glikbntfvennxqnhchroluirhkbsfvog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502768.9350169-129-82659599001333/AnsiballZ_stat.py'
Jan 27 08:32:49 compute-0 sudo[100905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:32:49 compute-0 python3.9[100907]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:32:49 compute-0 sudo[100905]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 27 08:32:49 compute-0 ceph-mon[74357]: 7.14 deep-scrub starts
Jan 27 08:32:49 compute-0 ceph-mon[74357]: 7.14 deep-scrub ok
Jan 27 08:32:49 compute-0 ceph-mon[74357]: osdmap e104: 3 total, 3 up, 3 in
Jan 27 08:32:49 compute-0 ceph-mon[74357]: pgmap v233: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:49 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 27 08:32:49 compute-0 ceph-mon[74357]: 5.2 deep-scrub starts
Jan 27 08:32:49 compute-0 ceph-mon[74357]: 5.2 deep-scrub ok
Jan 27 08:32:49 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 27 08:32:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 27 08:32:49 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 27 08:32:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:49.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:49 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 105 pg[9.16( v 45'998 (0'0,45'998] local-lis/les=71/72 n=5 ec=52/39 lis/c=71/71 les/c/f=72/72/0 sis=105 pruub=11.760350227s) [2] r=-1 lpr=105 pi=[71,105)/1 crt=45'998 mlcod 0'0 active pruub 210.911956787s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:49 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 105 pg[9.16( v 45'998 (0'0,45'998] local-lis/les=71/72 n=5 ec=52/39 lis/c=71/71 les/c/f=72/72/0 sis=105 pruub=11.760296822s) [2] r=-1 lpr=105 pi=[71,105)/1 crt=45'998 mlcod 0'0 unknown NOTIFY pruub 210.911956787s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:49.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
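
The radosgw beast lines record anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and .102 roughly every two seconds, consistent with load-balancer health checks (a haproxy-rgw container appears later in this log). A hedged sketch of issuing the same probe; the port is an assumption, since the log shows the requests but not the listener address:

    import http.client

    # Assumed endpoint: radosgw's listening port is not visible in this log.
    RGW_HOST, RGW_PORT = "192.168.122.100", 8080

    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=5)
    conn.request("HEAD", "/")               # same anonymous probe seen in the beast lines
    resp = conn.getresponse()
    print(resp.status)                      # the log shows these returning 200
    conn.close()
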
Jan 27 08:32:50 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 27 08:32:50 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 27 08:32:50 compute-0 sudo[101059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwdgyqbichzjqtmichvzlytuorrkhnqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502769.9634624-162-96952669947270/AnsiballZ_file.py'
Jan 27 08:32:50 compute-0 sudo[101059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:32:50 compute-0 python3.9[101061]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:32:50 compute-0 sudo[101059]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 27 08:32:50 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 27 08:32:50 compute-0 ceph-mon[74357]: osdmap e105: 3 total, 3 up, 3 in
Jan 27 08:32:50 compute-0 ceph-mon[74357]: 5.9 scrub starts
Jan 27 08:32:50 compute-0 ceph-mon[74357]: 5.9 scrub ok
Jan 27 08:32:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 27 08:32:50 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 27 08:32:50 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 106 pg[9.16( v 45'998 (0'0,45'998] local-lis/les=71/72 n=5 ec=52/39 lis/c=71/71 les/c/f=72/72/0 sis=106) [2]/[0] r=0 lpr=106 pi=[71,106)/1 crt=45'998 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:50 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 106 pg[9.16( v 45'998 (0'0,45'998] local-lis/les=71/72 n=5 ec=52/39 lis/c=71/71 les/c/f=72/72/0 sis=106) [2]/[0] r=0 lpr=106 pi=[71,106)/1 crt=45'998 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:50 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 106 pg[9.15( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=104/68 les/c/f=105/69/0 sis=106) [0] r=0 lpr=106 pi=[68,106)/1 luod=0'0 crt=45'998 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:50 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 106 pg[9.15( v 45'998 (0'0,45'998] local-lis/les=0/0 n=5 ec=52/39 lis/c=104/68 les/c/f=105/69/0 sis=106) [0] r=0 lpr=106 pi=[68,106)/1 crt=45'998 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:32:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:51 compute-0 sudo[101212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tahdeabxjigtcgleyslndzsyyqfgpzkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502771.1563344-189-154922600086507/AnsiballZ_file.py'
Jan 27 08:32:51 compute-0 sudo[101212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:32:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:51 compute-0 sudo[101213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:51 compute-0 sudo[101213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:51 compute-0 sudo[101213]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:51 compute-0 sudo[101240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:32:51 compute-0 sudo[101240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:51 compute-0 sudo[101240]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:51 compute-0 python3.9[101221]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:32:51 compute-0 sudo[101265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:51 compute-0 sudo[101265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:51 compute-0 sudo[101265]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:51 compute-0 sudo[101212]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:51 compute-0 sudo[101290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 27 08:32:51 compute-0 sudo[101290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
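
The command above is cephadm's "ls", which inventories the daemons deployed on this host and prints JSON. A sketch of driving it the same way and reading the daemon list; the cephadm path is copied verbatim from the sudo line, and root is required, as the sudo session lines show:

    import json
    import subprocess

    # Path copied verbatim from the sudo line above.
    CEPHADM = "/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d"

    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895", "ls"],
        check=True, capture_output=True, text=True,
    ).stdout
    for daemon in json.loads(out):          # `cephadm ls` emits a JSON list of daemons
        print(daemon.get("name"), daemon.get("state"))
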
Jan 27 08:32:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:32:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:51.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:32:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 27 08:32:51 compute-0 ceph-mon[74357]: osdmap e106: 3 total, 3 up, 3 in
Jan 27 08:32:51 compute-0 ceph-mon[74357]: pgmap v236: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 27 08:32:51 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 27 08:32:51 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 107 pg[9.15( v 45'998 (0'0,45'998] local-lis/les=106/107 n=5 ec=52/39 lis/c=104/68 les/c/f=105/69/0 sis=106) [0] r=0 lpr=106 pi=[68,106)/1 crt=45'998 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:51 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 107 pg[9.16( v 45'998 (0'0,45'998] local-lis/les=106/107 n=5 ec=52/39 lis/c=71/71 les/c/f=72/72/0 sis=106) [2]/[0] async=[2] r=0 lpr=106 pi=[71,106)/1 crt=45'998 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:32:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:51.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:52 compute-0 podman[101462]: 2026-01-27 08:32:52.140049972 +0000 UTC m=+0.064085883 container exec b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:32:52 compute-0 podman[101462]: 2026-01-27 08:32:52.242210101 +0000 UTC m=+0.166246012 container exec_died b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:32:52 compute-0 python3.9[101583]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:32:52 compute-0 network[101656]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:32:52 compute-0 network[101657]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:32:52 compute-0 network[101658]: It is advised to switch to 'NetworkManager' instead for network management.
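
The three network[...] lines are the deprecation notice from the legacy network-scripts service, advising a move to NetworkManager. A quick, hypothetical check for whether a host still carries legacy per-interface ifcfg files that would need migrating:

    import glob

    # Legacy initscripts keep per-interface config here; NetworkManager can read
    # these via its ifcfg-rh plugin, but the 'network' service itself is deprecated.
    legacy = glob.glob("/etc/sysconfig/network-scripts/ifcfg-*")
    if legacy:
        print("legacy ifcfg files still present:", *legacy, sep="\n  ")
    else:
        print("no legacy ifcfg files found")
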
Jan 27 08:32:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 27 08:32:52 compute-0 podman[101694]: 2026-01-27 08:32:52.765272052 +0000 UTC m=+0.052622297 container exec 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:32:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 27 08:32:52 compute-0 ceph-mon[74357]: osdmap e107: 3 total, 3 up, 3 in
Jan 27 08:32:52 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 108 pg[9.16( v 45'998 (0'0,45'998] local-lis/les=106/107 n=5 ec=52/39 lis/c=106/71 les/c/f=107/72/0 sis=108 pruub=14.956543922s) [2] async=[2] r=-1 lpr=108 pi=[71,108)/1 crt=45'998 mlcod 45'998 active pruub 217.207977295s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:32:52 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 108 pg[9.16( v 45'998 (0'0,45'998] local-lis/les=106/107 n=5 ec=52/39 lis/c=106/71 les/c/f=107/72/0 sis=108 pruub=14.955676079s) [2] r=-1 lpr=108 pi=[71,108)/1 crt=45'998 mlcod 0'0 unknown NOTIFY pruub 217.207977295s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:32:52 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 27 08:32:52 compute-0 podman[101694]: 2026-01-27 08:32:52.802297111 +0000 UTC m=+0.089647346 container exec_died 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:32:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:32:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:32:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:53 compute-0 podman[101767]: 2026-01-27 08:32:53.43330685 +0000 UTC m=+0.047088066 container exec eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, architecture=x86_64, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, description=keepalived for Ceph, release=1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9)
Jan 27 08:32:53 compute-0 podman[101791]: 2026-01-27 08:32:53.49803853 +0000 UTC m=+0.047844717 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, name=keepalived, version=2.2.4, description=keepalived for Ceph, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=keepalived-container, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Jan 27 08:32:53 compute-0 podman[101767]: 2026-01-27 08:32:53.504708803 +0000 UTC m=+0.118489989 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, io.buildah.version=1.28.2, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, build-date=2023-02-22T09:23:20, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 08:32:53 compute-0 sudo[101811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:53 compute-0 sudo[101811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:53 compute-0 sudo[101811]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:53 compute-0 sudo[101290]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:32:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:32:53 compute-0 sudo[101855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:53 compute-0 sudo[101855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:53 compute-0 sudo[101855]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:53.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:53 compute-0 sudo[101883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:53 compute-0 sudo[101883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:53 compute-0 sudo[101883]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:53 compute-0 sudo[101911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:32:53 compute-0 sudo[101911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:53 compute-0 sudo[101911]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 27 08:32:53 compute-0 ceph-mon[74357]: pgmap v238: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:53 compute-0 ceph-mon[74357]: osdmap e108: 3 total, 3 up, 3 in
Jan 27 08:32:53 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:53 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:53 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:53 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 27 08:32:53 compute-0 sudo[101940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:53 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 27 08:32:53 compute-0 sudo[101940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:53 compute-0 sudo[101940]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:32:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:32:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:32:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:53.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:32:53 compute-0 sudo[101969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:32:53 compute-0 sudo[101969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:54 compute-0 sudo[101969]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:32:54 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:32:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:32:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:32:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:32:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:54 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 0c6289ed-8d92-41bd-b03b-b1ae2fd5681b does not exist
Jan 27 08:32:54 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 12d81d8b-1b6d-4cff-a8a8-1c279893d5c4 does not exist
Jan 27 08:32:54 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 1e050790-f425-4b38-a771-55a41042251e does not exist
Jan 27 08:32:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:32:54 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:32:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:32:54 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:32:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:32:54 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
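
The handle_command lines show the mgr sending mon commands such as "config generate-minimal-conf" and "auth get" as JSON payloads. The same commands can be issued from Python with the python-rados binding's mon_command(); a sketch, assuming a readable ceph.conf and client.admin keyring on the host:

    import json
    import rados  # python3-rados binding

    # Assumes the usual client.admin setup; adjust conffile/keyring as needed.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})  # same command as in the log
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    print(ret, outbuf.decode())
    cluster.shutdown()
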
Jan 27 08:32:54 compute-0 sudo[102077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:54 compute-0 sudo[102077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:54 compute-0 sudo[102077]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:54 compute-0 sudo[102105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:32:54 compute-0 sudo[102105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:54 compute-0 sudo[102105]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:54 compute-0 ceph-mon[74357]: 10.4 scrub starts
Jan 27 08:32:54 compute-0 ceph-mon[74357]: 10.4 scrub ok
Jan 27 08:32:54 compute-0 ceph-mon[74357]: 11.b scrub starts
Jan 27 08:32:54 compute-0 ceph-mon[74357]: 11.b scrub ok
Jan 27 08:32:54 compute-0 ceph-mon[74357]: osdmap e109: 3 total, 3 up, 3 in
Jan 27 08:32:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:32:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:32:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:32:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:32:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:32:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:32:54 compute-0 sudo[102134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:54 compute-0 sudo[102134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:54 compute-0 sudo[102134]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:54 compute-0 sudo[102162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:32:54 compute-0 sudo[102162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
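
The sudo line above is cephadm wrapping "ceph-volume lvm batch" over the pre-built LV /dev/ceph_vg0/ceph_lv0, with the OSD spec name passed through CEPH_VOLUME_OSDSPEC_AFFINITY; the container output a few lines below ("All data devices are unavailable") shows ceph-volume declining to act, which is expected when the LV is already consumed by an OSD. A sketch of assembling that invocation; paths, image, and fsid are copied from the log, while the config JSON piped on stdin is not shown in the log and is left as a placeholder:

    import subprocess

    CEPHADM = "/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d"
    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"
    FSID = "281e9bde-2795-59f4-98ac-90cf5b49a2de"

    config_json = "{}"  # placeholder: the real config/keyring JSON is not in the log
    subprocess.run(
        ["sudo", "python3", CEPHADM,
         "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
         "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
         "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0", "--yes", "--no-systemd"],
        input=config_json, text=True, check=True,
    )
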
Jan 27 08:32:55 compute-0 podman[102258]: 2026-01-27 08:32:55.161351972 +0000 UTC m=+0.036421072 container create ae0457e2902c3b1db85436330960da06018d9e189956995c09700f8197cb9ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dirac, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:32:55 compute-0 systemd[1]: Started libpod-conmon-ae0457e2902c3b1db85436330960da06018d9e189956995c09700f8197cb9ee2.scope.
Jan 27 08:32:55 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:32:55 compute-0 podman[102258]: 2026-01-27 08:32:55.239294374 +0000 UTC m=+0.114363504 container init ae0457e2902c3b1db85436330960da06018d9e189956995c09700f8197cb9ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:32:55 compute-0 podman[102258]: 2026-01-27 08:32:55.146786481 +0000 UTC m=+0.021855601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:32:55 compute-0 podman[102258]: 2026-01-27 08:32:55.247298814 +0000 UTC m=+0.122367914 container start ae0457e2902c3b1db85436330960da06018d9e189956995c09700f8197cb9ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:32:55 compute-0 podman[102258]: 2026-01-27 08:32:55.250593286 +0000 UTC m=+0.125662406 container attach ae0457e2902c3b1db85436330960da06018d9e189956995c09700f8197cb9ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:32:55 compute-0 competent_dirac[102275]: 167 167
Jan 27 08:32:55 compute-0 systemd[1]: libpod-ae0457e2902c3b1db85436330960da06018d9e189956995c09700f8197cb9ee2.scope: Deactivated successfully.
Jan 27 08:32:55 compute-0 conmon[102275]: conmon ae0457e2902c3b1db854 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae0457e2902c3b1db85436330960da06018d9e189956995c09700f8197cb9ee2.scope/container/memory.events
Jan 27 08:32:55 compute-0 podman[102258]: 2026-01-27 08:32:55.254172014 +0000 UTC m=+0.129241114 container died ae0457e2902c3b1db85436330960da06018d9e189956995c09700f8197cb9ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 27 08:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f62faeaa81073dd774e4deac536b1c83712fc823078cb5d7304d61e715f17cf-merged.mount: Deactivated successfully.
Jan 27 08:32:55 compute-0 podman[102258]: 2026-01-27 08:32:55.297174126 +0000 UTC m=+0.172243226 container remove ae0457e2902c3b1db85436330960da06018d9e189956995c09700f8197cb9ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
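
The podman lines above trace one short-lived cephadm helper container through its full lifecycle: create, init, start, attach, died, remove. The same event stream can be watched live with "podman events"; a sketch using its JSON output, with field names as podman emits them:

    import json
    import subprocess

    # Streams lifecycle events like the create/init/start/died/remove entries above.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:                # one JSON object per event
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"), ev.get("Image"))
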
Jan 27 08:32:55 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Jan 27 08:32:55 compute-0 systemd[1]: libpod-conmon-ae0457e2902c3b1db85436330960da06018d9e189956995c09700f8197cb9ee2.scope: Deactivated successfully.
Jan 27 08:32:55 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Jan 27 08:32:55 compute-0 podman[102300]: 2026-01-27 08:32:55.431670094 +0000 UTC m=+0.036072423 container create 3bf16e4f2614a5dfe85486535d3106cca4d0ee1eac339e19fcde2f52cb1f7f91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_liskov, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:32:55 compute-0 systemd[1]: Started libpod-conmon-3bf16e4f2614a5dfe85486535d3106cca4d0ee1eac339e19fcde2f52cb1f7f91.scope.
Jan 27 08:32:55 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f4ab156da14dd5b793381bf13430dcec54bb191c9d36ad924a7315883258fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f4ab156da14dd5b793381bf13430dcec54bb191c9d36ad924a7315883258fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f4ab156da14dd5b793381bf13430dcec54bb191c9d36ad924a7315883258fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f4ab156da14dd5b793381bf13430dcec54bb191c9d36ad924a7315883258fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f4ab156da14dd5b793381bf13430dcec54bb191c9d36ad924a7315883258fe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:55 compute-0 podman[102300]: 2026-01-27 08:32:55.416660332 +0000 UTC m=+0.021062671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:32:55 compute-0 podman[102300]: 2026-01-27 08:32:55.519247902 +0000 UTC m=+0.123650231 container init 3bf16e4f2614a5dfe85486535d3106cca4d0ee1eac339e19fcde2f52cb1f7f91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:32:55 compute-0 podman[102300]: 2026-01-27 08:32:55.525652868 +0000 UTC m=+0.130055187 container start 3bf16e4f2614a5dfe85486535d3106cca4d0ee1eac339e19fcde2f52cb1f7f91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_liskov, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:32:55 compute-0 podman[102300]: 2026-01-27 08:32:55.528662401 +0000 UTC m=+0.133064740 container attach 3bf16e4f2614a5dfe85486535d3106cca4d0ee1eac339e19fcde2f52cb1f7f91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_liskov, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:32:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:55.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:55 compute-0 ceph-mon[74357]: 11.c scrub starts
Jan 27 08:32:55 compute-0 ceph-mon[74357]: 11.c scrub ok
Jan 27 08:32:55 compute-0 ceph-mon[74357]: pgmap v241: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:32:55 compute-0 ceph-mon[74357]: 5.1c deep-scrub starts
Jan 27 08:32:55 compute-0 ceph-mon[74357]: 5.1c deep-scrub ok
Jan 27 08:32:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:55.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:56 compute-0 python3.9[102446]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:32:56 compute-0 hopeful_liskov[102340]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:32:56 compute-0 hopeful_liskov[102340]: --> relative data size: 1.0
Jan 27 08:32:56 compute-0 hopeful_liskov[102340]: --> All data devices are unavailable
Jan 27 08:32:56 compute-0 systemd[1]: libpod-3bf16e4f2614a5dfe85486535d3106cca4d0ee1eac339e19fcde2f52cb1f7f91.scope: Deactivated successfully.
Jan 27 08:32:56 compute-0 podman[102300]: 2026-01-27 08:32:56.308200711 +0000 UTC m=+0.912603030 container died 3bf16e4f2614a5dfe85486535d3106cca4d0ee1eac339e19fcde2f52cb1f7f91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_liskov, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-01f4ab156da14dd5b793381bf13430dcec54bb191c9d36ad924a7315883258fe-merged.mount: Deactivated successfully.
Jan 27 08:32:56 compute-0 podman[102300]: 2026-01-27 08:32:56.36271167 +0000 UTC m=+0.967113989 container remove 3bf16e4f2614a5dfe85486535d3106cca4d0ee1eac339e19fcde2f52cb1f7f91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:32:56 compute-0 systemd[1]: libpod-conmon-3bf16e4f2614a5dfe85486535d3106cca4d0ee1eac339e19fcde2f52cb1f7f91.scope: Deactivated successfully.
Jan 27 08:32:56 compute-0 sudo[102162]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:56 compute-0 sudo[102544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:56 compute-0 sudo[102544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:56 compute-0 sudo[102544]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:56 compute-0 sudo[102592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:32:56 compute-0 sudo[102592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:56 compute-0 sudo[102592]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:56 compute-0 sudo[102640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:56 compute-0 sudo[102640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:56 compute-0 sudo[102640]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:56 compute-0 sudo[102692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:32:56 compute-0 sudo[102692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
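
This cephadm call wraps "ceph-volume lvm list --format json", which reports existing OSD logical volumes as a JSON map keyed by OSD id. A sketch of consuming that output, using the same cephadm path and fsid as the log line above:

    import json
    import subprocess

    CEPHADM = "/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d"
    FSID = "281e9bde-2795-59f4-98ac-90cf5b49a2de"

    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    # ceph-volume returns a dict of {osd_id: [device records]}.
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            print(osd_id, dev.get("lv_path"), dev.get("type"))
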
Jan 27 08:32:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 170 B/s wr, 11 op/s; 36 B/s, 1 objects/s recovering
Jan 27 08:32:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Jan 27 08:32:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 27 08:32:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 27 08:32:56 compute-0 python3.9[102693]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:32:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 27 08:32:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 27 08:32:56 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 27 08:32:56 compute-0 podman[102758]: 2026-01-27 08:32:56.928254635 +0000 UTC m=+0.036706661 container create dd8a237e2737c9856a17568b02a72011b010403449049d63cca9bcbce389f259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_einstein, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:32:56 compute-0 ceph-mon[74357]: 11.d scrub starts
Jan 27 08:32:56 compute-0 ceph-mon[74357]: 11.d scrub ok
Jan 27 08:32:56 compute-0 ceph-mon[74357]: pgmap v242: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 170 B/s wr, 11 op/s; 36 B/s, 1 objects/s recovering
Jan 27 08:32:56 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 27 08:32:56 compute-0 systemd[1]: Started libpod-conmon-dd8a237e2737c9856a17568b02a72011b010403449049d63cca9bcbce389f259.scope.
Jan 27 08:32:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:32:57 compute-0 podman[102758]: 2026-01-27 08:32:57.008078551 +0000 UTC m=+0.116530617 container init dd8a237e2737c9856a17568b02a72011b010403449049d63cca9bcbce389f259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:32:57 compute-0 podman[102758]: 2026-01-27 08:32:56.913295039 +0000 UTC m=+0.021747095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:32:57 compute-0 podman[102758]: 2026-01-27 08:32:57.013628992 +0000 UTC m=+0.122081018 container start dd8a237e2737c9856a17568b02a72011b010403449049d63cca9bcbce389f259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_einstein, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:32:57 compute-0 podman[102758]: 2026-01-27 08:32:57.01698451 +0000 UTC m=+0.125436566 container attach dd8a237e2737c9856a17568b02a72011b010403449049d63cca9bcbce389f259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_einstein, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:32:57 compute-0 systemd[1]: libpod-dd8a237e2737c9856a17568b02a72011b010403449049d63cca9bcbce389f259.scope: Deactivated successfully.
Jan 27 08:32:57 compute-0 determined_einstein[102777]: 167 167
Jan 27 08:32:57 compute-0 podman[102758]: 2026-01-27 08:32:57.018844734 +0000 UTC m=+0.127296790 container died dd8a237e2737c9856a17568b02a72011b010403449049d63cca9bcbce389f259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 08:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d94a6462079e3c5fe4d3fefac433891a254e9d122760f65c9b753b6fb83960db-merged.mount: Deactivated successfully.
Jan 27 08:32:57 compute-0 podman[102758]: 2026-01-27 08:32:57.053552365 +0000 UTC m=+0.162004391 container remove dd8a237e2737c9856a17568b02a72011b010403449049d63cca9bcbce389f259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_einstein, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 08:32:57 compute-0 systemd[1]: libpod-conmon-dd8a237e2737c9856a17568b02a72011b010403449049d63cca9bcbce389f259.scope: Deactivated successfully.
Jan 27 08:32:57 compute-0 podman[102826]: 2026-01-27 08:32:57.215952556 +0000 UTC m=+0.040484330 container create c12833543b11c82b986f1dac2ef52f25aba2e127d7df1217c42155eb6df7d040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:32:57 compute-0 systemd[1]: Started libpod-conmon-c12833543b11c82b986f1dac2ef52f25aba2e127d7df1217c42155eb6df7d040.scope.
Jan 27 08:32:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556e01f930ada834c9d0777dd6f3045b33f562569cf42b97a86cea301c7e8b2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556e01f930ada834c9d0777dd6f3045b33f562569cf42b97a86cea301c7e8b2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556e01f930ada834c9d0777dd6f3045b33f562569cf42b97a86cea301c7e8b2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556e01f930ada834c9d0777dd6f3045b33f562569cf42b97a86cea301c7e8b2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:57 compute-0 podman[102826]: 2026-01-27 08:32:57.290692334 +0000 UTC m=+0.115224128 container init c12833543b11c82b986f1dac2ef52f25aba2e127d7df1217c42155eb6df7d040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:32:57 compute-0 podman[102826]: 2026-01-27 08:32:57.197722796 +0000 UTC m=+0.022254590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:32:57 compute-0 podman[102826]: 2026-01-27 08:32:57.298746739 +0000 UTC m=+0.123278513 container start c12833543b11c82b986f1dac2ef52f25aba2e127d7df1217c42155eb6df7d040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:32:57 compute-0 podman[102826]: 2026-01-27 08:32:57.301628943 +0000 UTC m=+0.126160737 container attach c12833543b11c82b986f1dac2ef52f25aba2e127d7df1217c42155eb6df7d040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:32:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:57.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:57.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 27 08:32:57 compute-0 ceph-mon[74357]: osdmap e110: 3 total, 3 up, 3 in
Jan 27 08:32:57 compute-0 ceph-mon[74357]: 7.a scrub starts
Jan 27 08:32:57 compute-0 ceph-mon[74357]: 7.a scrub ok
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]: {
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:     "0": [
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:         {
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             "devices": [
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "/dev/loop3"
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             ],
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             "lv_name": "ceph_lv0",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             "lv_size": "7511998464",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             "name": "ceph_lv0",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             "tags": {
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "ceph.cluster_name": "ceph",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "ceph.crush_device_class": "",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "ceph.encrypted": "0",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "ceph.osd_id": "0",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "ceph.type": "block",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:                 "ceph.vdo": "0"
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             },
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             "type": "block",
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:             "vg_name": "ceph_vg0"
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:         }
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]:     ]
Jan 27 08:32:58 compute-0 elegant_wilbur[102842]: }
Jan 27 08:32:58 compute-0 systemd[1]: libpod-c12833543b11c82b986f1dac2ef52f25aba2e127d7df1217c42155eb6df7d040.scope: Deactivated successfully.
Jan 27 08:32:58 compute-0 podman[102826]: 2026-01-27 08:32:58.074851149 +0000 UTC m=+0.899382923 container died c12833543b11c82b986f1dac2ef52f25aba2e127d7df1217c42155eb6df7d040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-556e01f930ada834c9d0777dd6f3045b33f562569cf42b97a86cea301c7e8b2d-merged.mount: Deactivated successfully.
Jan 27 08:32:58 compute-0 podman[102826]: 2026-01-27 08:32:58.129217772 +0000 UTC m=+0.953749546 container remove c12833543b11c82b986f1dac2ef52f25aba2e127d7df1217c42155eb6df7d040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:32:58 compute-0 systemd[1]: libpod-conmon-c12833543b11c82b986f1dac2ef52f25aba2e127d7df1217c42155eb6df7d040.scope: Deactivated successfully.
Jan 27 08:32:58 compute-0 sudo[102692]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:58 compute-0 sudo[102989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:58 compute-0 sudo[102989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:58 compute-0 sudo[102989]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:58 compute-0 python3.9[102972]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:32:58 compute-0 sudo[103015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:32:58 compute-0 sudo[103015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:58 compute-0 sudo[103015]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:58 compute-0 sudo[103045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:32:58 compute-0 sudo[103045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:58 compute-0 sudo[103045]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:58 compute-0 sudo[103070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:32:58 compute-0 sudo[103070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:32:58 compute-0 podman[103159]: 2026-01-27 08:32:58.715766479 +0000 UTC m=+0.042198610 container create 6b17972dd1897613e929449fac4bbeda37353b0a92017813809892177a2e3318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:32:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 170 B/s wr, 11 op/s; 36 B/s, 1 objects/s recovering
Jan 27 08:32:58 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Jan 27 08:32:58 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 27 08:32:58 compute-0 systemd[1]: Started libpod-conmon-6b17972dd1897613e929449fac4bbeda37353b0a92017813809892177a2e3318.scope.
Jan 27 08:32:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:32:58 compute-0 podman[103159]: 2026-01-27 08:32:58.696319124 +0000 UTC m=+0.022751275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:32:58 compute-0 podman[103159]: 2026-01-27 08:32:58.792931638 +0000 UTC m=+0.119363789 container init 6b17972dd1897613e929449fac4bbeda37353b0a92017813809892177a2e3318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elgamal, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:32:58 compute-0 podman[103159]: 2026-01-27 08:32:58.799986674 +0000 UTC m=+0.126418805 container start 6b17972dd1897613e929449fac4bbeda37353b0a92017813809892177a2e3318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elgamal, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 27 08:32:58 compute-0 podman[103159]: 2026-01-27 08:32:58.803526847 +0000 UTC m=+0.129958988 container attach 6b17972dd1897613e929449fac4bbeda37353b0a92017813809892177a2e3318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elgamal, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 08:32:58 compute-0 tender_elgamal[103175]: 167 167
Jan 27 08:32:58 compute-0 systemd[1]: libpod-6b17972dd1897613e929449fac4bbeda37353b0a92017813809892177a2e3318.scope: Deactivated successfully.
Jan 27 08:32:58 compute-0 conmon[103175]: conmon 6b17972dd1897613e929 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b17972dd1897613e929449fac4bbeda37353b0a92017813809892177a2e3318.scope/container/memory.events
Jan 27 08:32:58 compute-0 podman[103159]: 2026-01-27 08:32:58.806701879 +0000 UTC m=+0.133134030 container died 6b17972dd1897613e929449fac4bbeda37353b0a92017813809892177a2e3318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elgamal, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-140967cf9e7cd82416b23a006a44fa7efb8206dae29451da98162b672c51e207-merged.mount: Deactivated successfully.
Jan 27 08:32:58 compute-0 podman[103159]: 2026-01-27 08:32:58.927138448 +0000 UTC m=+0.253570579 container remove 6b17972dd1897613e929449fac4bbeda37353b0a92017813809892177a2e3318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:32:58 compute-0 systemd[1]: libpod-conmon-6b17972dd1897613e929449fac4bbeda37353b0a92017813809892177a2e3318.scope: Deactivated successfully.
Jan 27 08:32:58 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 27 08:32:59 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 27 08:32:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 27 08:32:59 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 27 08:32:59 compute-0 podman[103273]: 2026-01-27 08:32:59.078635181 +0000 UTC m=+0.036632348 container create baa32f91f5e71255d8e486502e4122f2430fa64bc3873ad835bedbb4b692433e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 08:32:59 compute-0 systemd[1]: Started libpod-conmon-baa32f91f5e71255d8e486502e4122f2430fa64bc3873ad835bedbb4b692433e.scope.
Jan 27 08:32:59 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49609c6bc5f9db60614e67097c61a20fe5110557642528ab7dc53810a6f2bf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49609c6bc5f9db60614e67097c61a20fe5110557642528ab7dc53810a6f2bf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49609c6bc5f9db60614e67097c61a20fe5110557642528ab7dc53810a6f2bf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49609c6bc5f9db60614e67097c61a20fe5110557642528ab7dc53810a6f2bf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:59 compute-0 ceph-mon[74357]: pgmap v244: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 170 B/s wr, 11 op/s; 36 B/s, 1 objects/s recovering
Jan 27 08:32:59 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 27 08:32:59 compute-0 podman[103273]: 2026-01-27 08:32:59.144618264 +0000 UTC m=+0.102615461 container init baa32f91f5e71255d8e486502e4122f2430fa64bc3873ad835bedbb4b692433e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 08:32:59 compute-0 podman[103273]: 2026-01-27 08:32:59.152470942 +0000 UTC m=+0.110468109 container start baa32f91f5e71255d8e486502e4122f2430fa64bc3873ad835bedbb4b692433e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_liskov, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:32:59 compute-0 podman[103273]: 2026-01-27 08:32:59.155585592 +0000 UTC m=+0.113582759 container attach baa32f91f5e71255d8e486502e4122f2430fa64bc3873ad835bedbb4b692433e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:32:59 compute-0 podman[103273]: 2026-01-27 08:32:59.064705285 +0000 UTC m=+0.022702492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:32:59 compute-0 sudo[103347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrxykpymvwnlvepuhcncbrmsjbfnkvzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502778.9114237-333-100123931458362/AnsiballZ_setup.py'
Jan 27 08:32:59 compute-0 sudo[103347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:32:59 compute-0 python3.9[103349]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:32:59 compute-0 sudo[103347]: pam_unix(sudo:session): session closed for user root
Jan 27 08:32:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:32:59.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:32:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:32:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:32:59.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:32:59 compute-0 goofy_liskov[103315]: {
Jan 27 08:32:59 compute-0 goofy_liskov[103315]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:32:59 compute-0 goofy_liskov[103315]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:32:59 compute-0 goofy_liskov[103315]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:32:59 compute-0 goofy_liskov[103315]:         "osd_id": 0,
Jan 27 08:32:59 compute-0 goofy_liskov[103315]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:32:59 compute-0 goofy_liskov[103315]:         "type": "bluestore"
Jan 27 08:32:59 compute-0 goofy_liskov[103315]:     }
Jan 27 08:32:59 compute-0 goofy_liskov[103315]: }
Jan 27 08:33:00 compute-0 systemd[1]: libpod-baa32f91f5e71255d8e486502e4122f2430fa64bc3873ad835bedbb4b692433e.scope: Deactivated successfully.
Jan 27 08:33:00 compute-0 podman[103273]: 2026-01-27 08:33:00.020187771 +0000 UTC m=+0.978184958 container died baa32f91f5e71255d8e486502e4122f2430fa64bc3873ad835bedbb4b692433e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_liskov, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b49609c6bc5f9db60614e67097c61a20fe5110557642528ab7dc53810a6f2bf0-merged.mount: Deactivated successfully.
Jan 27 08:33:00 compute-0 podman[103273]: 2026-01-27 08:33:00.071467284 +0000 UTC m=+1.029464451 container remove baa32f91f5e71255d8e486502e4122f2430fa64bc3873ad835bedbb4b692433e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_liskov, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:33:00 compute-0 sudo[103459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgzriztgmywnyqjrfzwdpmfptiokthqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502778.9114237-333-100123931458362/AnsiballZ_dnf.py'
Jan 27 08:33:00 compute-0 systemd[1]: libpod-conmon-baa32f91f5e71255d8e486502e4122f2430fa64bc3873ad835bedbb4b692433e.scope: Deactivated successfully.
Jan 27 08:33:00 compute-0 sudo[103459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:00 compute-0 sudo[103070]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:33:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:33:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:33:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:33:00 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d0b0a574-22af-45c0-a4f1-5d3393e67106 does not exist
Jan 27 08:33:00 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev e32ccf5b-9b7c-479c-91a5-1b3673070275 does not exist
Jan 27 08:33:00 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 1da4ac7f-2e40-412a-97f7-989449cfab30 does not exist
Jan 27 08:33:00 compute-0 ceph-mon[74357]: 11.10 scrub starts
Jan 27 08:33:00 compute-0 ceph-mon[74357]: 11.10 scrub ok
Jan 27 08:33:00 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 27 08:33:00 compute-0 ceph-mon[74357]: osdmap e111: 3 total, 3 up, 3 in
Jan 27 08:33:00 compute-0 ceph-mon[74357]: 10.1 scrub starts
Jan 27 08:33:00 compute-0 ceph-mon[74357]: 10.1 scrub ok
Jan 27 08:33:00 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:33:00 compute-0 sudo[103462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:33:00 compute-0 sudo[103462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:33:00 compute-0 sudo[103462]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:00 compute-0 sudo[103487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:33:00 compute-0 sudo[103487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:33:00 compute-0 sudo[103487]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:00 compute-0 python3.9[103461]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:33:00 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 27 08:33:00 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 27 08:33:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 5.5 KiB/s rd, 148 B/s wr, 10 op/s; 31 B/s, 1 objects/s recovering
Jan 27 08:33:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Jan 27 08:33:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 27 08:33:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 27 08:33:01 compute-0 ceph-mon[74357]: 7.16 deep-scrub starts
Jan 27 08:33:01 compute-0 ceph-mon[74357]: 7.16 deep-scrub ok
Jan 27 08:33:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:33:01 compute-0 ceph-mon[74357]: 5.1b scrub starts
Jan 27 08:33:01 compute-0 ceph-mon[74357]: 5.1b scrub ok
Jan 27 08:33:01 compute-0 ceph-mon[74357]: pgmap v246: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 5.5 KiB/s rd, 148 B/s wr, 10 op/s; 31 B/s, 1 objects/s recovering
Jan 27 08:33:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 27 08:33:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 27 08:33:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 27 08:33:01 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 27 08:33:01 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 27 08:33:01 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 27 08:33:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 27 08:33:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 27 08:33:01 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 27 08:33:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:01.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:01.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:02 compute-0 ceph-mon[74357]: 10.1e deep-scrub starts
Jan 27 08:33:02 compute-0 ceph-mon[74357]: 10.1e deep-scrub ok
Jan 27 08:33:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 27 08:33:02 compute-0 ceph-mon[74357]: osdmap e112: 3 total, 3 up, 3 in
Jan 27 08:33:02 compute-0 ceph-mon[74357]: 5.1f scrub starts
Jan 27 08:33:02 compute-0 ceph-mon[74357]: 5.1f scrub ok
Jan 27 08:33:02 compute-0 ceph-mon[74357]: osdmap e113: 3 total, 3 up, 3 in
Jan 27 08:33:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 27 08:33:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 27 08:33:02 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 27 08:33:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Jan 27 08:33:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 27 08:33:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 27 08:33:03 compute-0 ceph-mon[74357]: osdmap e114: 3 total, 3 up, 3 in
Jan 27 08:33:03 compute-0 ceph-mon[74357]: pgmap v250: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:03 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 27 08:33:03 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 27 08:33:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 27 08:33:03 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 27 08:33:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:03.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:03.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 27 08:33:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 27 08:33:04 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 27 08:33:04 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 27 08:33:04 compute-0 ceph-mon[74357]: osdmap e115: 3 total, 3 up, 3 in
Jan 27 08:33:04 compute-0 ceph-mon[74357]: 11.11 scrub starts
Jan 27 08:33:04 compute-0 ceph-mon[74357]: 11.11 scrub ok
Jan 27 08:33:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Jan 27 08:33:04 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 27 08:33:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 27 08:33:05 compute-0 ceph-mon[74357]: 10.10 scrub starts
Jan 27 08:33:05 compute-0 ceph-mon[74357]: 10.10 scrub ok
Jan 27 08:33:05 compute-0 ceph-mon[74357]: osdmap e116: 3 total, 3 up, 3 in
Jan 27 08:33:05 compute-0 ceph-mon[74357]: 11.15 scrub starts
Jan 27 08:33:05 compute-0 ceph-mon[74357]: 11.15 scrub ok
Jan 27 08:33:05 compute-0 ceph-mon[74357]: pgmap v253: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:05 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 27 08:33:05 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 115 pg[9.1a( v 45'998 (0'0,45'998] local-lis/les=83/84 n=5 ec=52/39 lis/c=83/83 les/c/f=84/84/0 sis=115 pruub=13.009972572s) [1] r=-1 lpr=115 pi=[83,115)/1 crt=45'998 mlcod 0'0 active pruub 228.030166626s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:33:05 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 116 pg[9.1a( v 45'998 (0'0,45'998] local-lis/les=83/84 n=5 ec=52/39 lis/c=83/83 les/c/f=84/84/0 sis=115 pruub=13.009826660s) [1] r=-1 lpr=115 pi=[83,115)/1 crt=45'998 mlcod 0'0 unknown NOTIFY pruub 228.030166626s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:33:05 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 27 08:33:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 27 08:33:05 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 27 08:33:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:05.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:05.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 27 08:33:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 27 08:33:06 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 27 08:33:06 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 118 pg[9.1a( v 45'998 (0'0,45'998] local-lis/les=83/84 n=5 ec=52/39 lis/c=83/83 les/c/f=84/84/0 sis=118) [1]/[0] r=0 lpr=118 pi=[83,118)/2 crt=45'998 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:33:06 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 118 pg[9.1a( v 45'998 (0'0,45'998] local-lis/les=83/84 n=5 ec=52/39 lis/c=83/83 les/c/f=84/84/0 sis=118) [1]/[0] r=0 lpr=118 pi=[83,118)/2 crt=45'998 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:33:06 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 27 08:33:06 compute-0 ceph-mon[74357]: osdmap e117: 3 total, 3 up, 3 in
Jan 27 08:33:06 compute-0 ceph-mon[74357]: osdmap e118: 3 total, 3 up, 3 in
Jan 27 08:33:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Jan 27 08:33:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Jan 27 08:33:06 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 27 08:33:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 27 08:33:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 27 08:33:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 27 08:33:07 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 27 08:33:07 compute-0 ceph-mon[74357]: 11.18 deep-scrub starts
Jan 27 08:33:07 compute-0 ceph-mon[74357]: 11.18 deep-scrub ok
Jan 27 08:33:07 compute-0 ceph-mon[74357]: pgmap v256: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Jan 27 08:33:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 27 08:33:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 27 08:33:07 compute-0 ceph-mon[74357]: osdmap e119: 3 total, 3 up, 3 in
Jan 27 08:33:07 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 119 pg[9.1a( v 45'998 (0'0,45'998] local-lis/les=118/119 n=5 ec=52/39 lis/c=83/83 les/c/f=84/84/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[83,118)/2 crt=45'998 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:33:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:07.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:07.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:08 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 27 08:33:08 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 27 08:33:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 27 08:33:08 compute-0 ceph-mon[74357]: 11.1f scrub starts
Jan 27 08:33:08 compute-0 ceph-mon[74357]: 11.1f scrub ok
Jan 27 08:33:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 27 08:33:08 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 27 08:33:08 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 120 pg[9.1a( v 45'998 (0'0,45'998] local-lis/les=118/119 n=5 ec=52/39 lis/c=118/83 les/c/f=119/84/0 sis=120 pruub=14.978653908s) [1] async=[1] r=-1 lpr=120 pi=[83,120)/2 crt=45'998 mlcod 45'998 active pruub 233.087158203s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:33:08 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 120 pg[9.1a( v 45'998 (0'0,45'998] local-lis/les=118/119 n=5 ec=52/39 lis/c=118/83 les/c/f=119/84/0 sis=120 pruub=14.978574753s) [1] r=-1 lpr=120 pi=[83,120)/2 crt=45'998 mlcod 0'0 unknown NOTIFY pruub 233.087158203s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:33:08.656781) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502788656906, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7487, "num_deletes": 251, "total_data_size": 9862450, "memory_usage": 10174112, "flush_reason": "Manual Compaction"}
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502788690520, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7966137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 145, "largest_seqno": 7623, "table_properties": {"data_size": 7938507, "index_size": 18107, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 78513, "raw_average_key_size": 23, "raw_value_size": 7873433, "raw_average_value_size": 2341, "num_data_blocks": 800, "num_entries": 3363, "num_filter_entries": 3363, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502446, "oldest_key_time": 1769502446, "file_creation_time": 1769502788, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 33796 microseconds, and 15137 cpu microseconds.
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:33:08.690575) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7966137 bytes OK
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:33:08.690607) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:33:08.691742) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:33:08.691768) EVENT_LOG_v1 {"time_micros": 1769502788691763, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:33:08.691792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9829686, prev total WAL file size 9829686, number of live WAL files 2.
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:33:08.694125) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7779KB) 13(53KB) 8(1944B)]
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502788694203, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8022930, "oldest_snapshot_seqno": -1}
Jan 27 08:33:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Jan 27 08:33:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Jan 27 08:33:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3175 keys, 7978316 bytes, temperature: kUnknown
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502788731933, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7978316, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7951162, "index_size": 18084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 76418, "raw_average_key_size": 24, "raw_value_size": 7887856, "raw_average_value_size": 2484, "num_data_blocks": 802, "num_entries": 3175, "num_filter_entries": 3175, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769502788, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:33:08.732320) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7978316 bytes
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:33:08.733942) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 212.0 rd, 210.8 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.7, 0.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3467, records dropped: 292 output_compression: NoCompression
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:33:08.733963) EVENT_LOG_v1 {"time_micros": 1769502788733952, "job": 4, "event": "compaction_finished", "compaction_time_micros": 37850, "compaction_time_cpu_micros": 16420, "output_level": 6, "num_output_files": 1, "total_output_size": 7978316, "num_input_records": 3467, "num_output_records": 3175, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502788735077, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502788735154, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502788735206, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 27 08:33:08 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:33:08.694005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:33:09 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 27 08:33:09 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 27 08:33:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 27 08:33:09 compute-0 ceph-mon[74357]: 5.f scrub starts
Jan 27 08:33:09 compute-0 ceph-mon[74357]: 5.f scrub ok
Jan 27 08:33:09 compute-0 ceph-mon[74357]: osdmap e120: 3 total, 3 up, 3 in
Jan 27 08:33:09 compute-0 ceph-mon[74357]: pgmap v259: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Jan 27 08:33:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 27 08:33:09 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 27 08:33:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 27 08:33:09 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 27 08:33:09 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 121 pg[9.1d( v 45'998 (0'0,45'998] local-lis/les=89/90 n=5 ec=52/39 lis/c=89/89 les/c/f=90/90/0 sis=121 pruub=8.675179482s) [2] r=-1 lpr=121 pi=[89,121)/1 crt=45'998 mlcod 0'0 active pruub 227.801986694s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:33:09 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 121 pg[9.1d( v 45'998 (0'0,45'998] local-lis/les=89/90 n=5 ec=52/39 lis/c=89/89 les/c/f=90/90/0 sis=121 pruub=8.675122261s) [2] r=-1 lpr=121 pi=[89,121)/1 crt=45'998 mlcod 0'0 unknown NOTIFY pruub 227.801986694s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:33:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:33:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:09.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:33:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:09.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:10 compute-0 ceph-mon[74357]: 5.1 scrub starts
Jan 27 08:33:10 compute-0 ceph-mon[74357]: 5.1 scrub ok
Jan 27 08:33:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 27 08:33:10 compute-0 ceph-mon[74357]: osdmap e121: 3 total, 3 up, 3 in
Jan 27 08:33:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 27 08:33:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 27 08:33:10 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 27 08:33:10 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 122 pg[9.1d( v 45'998 (0'0,45'998] local-lis/les=89/90 n=5 ec=52/39 lis/c=89/89 les/c/f=90/90/0 sis=122) [2]/[0] r=0 lpr=122 pi=[89,122)/1 crt=45'998 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:33:10 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 122 pg[9.1d( v 45'998 (0'0,45'998] local-lis/les=89/90 n=5 ec=52/39 lis/c=89/89 les/c/f=90/90/0 sis=122) [2]/[0] r=0 lpr=122 pi=[89,122)/1 crt=45'998 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:33:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 27 08:33:11 compute-0 ceph-mon[74357]: osdmap e122: 3 total, 3 up, 3 in
Jan 27 08:33:11 compute-0 ceph-mon[74357]: pgmap v262: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 27 08:33:11 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 27 08:33:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:11.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:11 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 123 pg[9.1d( v 45'998 (0'0,45'998] local-lis/les=122/123 n=5 ec=52/39 lis/c=89/89 les/c/f=90/90/0 sis=122) [2]/[0] async=[2] r=0 lpr=122 pi=[89,122)/1 crt=45'998 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:33:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:11.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 27 08:33:12 compute-0 ceph-mon[74357]: osdmap e123: 3 total, 3 up, 3 in
Jan 27 08:33:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 27 08:33:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:12 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 27 08:33:12 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 124 pg[9.1d( v 45'998 (0'0,45'998] local-lis/les=122/123 n=5 ec=52/39 lis/c=122/89 les/c/f=123/90/0 sis=124 pruub=14.987370491s) [2] async=[2] r=-1 lpr=124 pi=[89,124)/1 crt=45'998 mlcod 45'998 active pruub 237.177825928s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:33:12 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 124 pg[9.1d( v 45'998 (0'0,45'998] local-lis/les=122/123 n=5 ec=52/39 lis/c=122/89 les/c/f=123/90/0 sis=124 pruub=14.987298965s) [2] r=-1 lpr=124 pi=[89,124)/1 crt=45'998 mlcod 0'0 unknown NOTIFY pruub 237.177825928s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:33:13 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Jan 27 08:33:13 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Jan 27 08:33:13 compute-0 sudo[103593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:33:13 compute-0 sudo[103593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:33:13 compute-0 sudo[103593]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:13.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 27 08:33:13 compute-0 ceph-mon[74357]: 7.b scrub starts
Jan 27 08:33:13 compute-0 ceph-mon[74357]: 7.b scrub ok
Jan 27 08:33:13 compute-0 ceph-mon[74357]: pgmap v264: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:13 compute-0 ceph-mon[74357]: osdmap e124: 3 total, 3 up, 3 in
Jan 27 08:33:13 compute-0 ceph-mon[74357]: 4.1b deep-scrub starts
Jan 27 08:33:13 compute-0 ceph-mon[74357]: 4.1b deep-scrub ok
Jan 27 08:33:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 27 08:33:13 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 27 08:33:13 compute-0 sudo[103618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:33:13 compute-0 sudo[103618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:33:13 compute-0 sudo[103618]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:13.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:14 compute-0 ceph-mon[74357]: osdmap e125: 3 total, 3 up, 3 in
Jan 27 08:33:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:33:14
Jan 27 08:33:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:33:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:33:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:33:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:15.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:15 compute-0 ceph-mon[74357]: pgmap v267: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:15.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 170 B/s wr, 11 op/s; 109 B/s, 2 objects/s recovering
Jan 27 08:33:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Jan 27 08:33:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 27 08:33:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 27 08:33:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 27 08:33:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 27 08:33:16 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 27 08:33:16 compute-0 ceph-mon[74357]: 7.13 scrub starts
Jan 27 08:33:16 compute-0 ceph-mon[74357]: 7.13 scrub ok
Jan 27 08:33:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 27 08:33:17 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 126 pg[9.1e( v 45'998 (0'0,45'998] local-lis/les=71/72 n=5 ec=52/39 lis/c=71/71 les/c/f=72/72/0 sis=126 pruub=8.158556938s) [1] r=-1 lpr=126 pi=[71,126)/1 crt=45'998 mlcod 0'0 active pruub 234.912429810s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:33:17 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 126 pg[9.1e( v 45'998 (0'0,45'998] local-lis/les=71/72 n=5 ec=52/39 lis/c=71/71 les/c/f=72/72/0 sis=126 pruub=8.158487320s) [1] r=-1 lpr=126 pi=[71,126)/1 crt=45'998 mlcod 0'0 unknown NOTIFY pruub 234.912429810s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:33:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:17.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 27 08:33:17 compute-0 ceph-mon[74357]: 10.1b scrub starts
Jan 27 08:33:17 compute-0 ceph-mon[74357]: 10.1b scrub ok
Jan 27 08:33:17 compute-0 ceph-mon[74357]: pgmap v268: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 170 B/s wr, 11 op/s; 109 B/s, 2 objects/s recovering
Jan 27 08:33:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 27 08:33:17 compute-0 ceph-mon[74357]: osdmap e126: 3 total, 3 up, 3 in
Jan 27 08:33:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 27 08:33:17 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 27 08:33:17 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 127 pg[9.1e( v 45'998 (0'0,45'998] local-lis/les=71/72 n=5 ec=52/39 lis/c=71/71 les/c/f=72/72/0 sis=127) [1]/[0] r=0 lpr=127 pi=[71,127)/1 crt=45'998 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:33:17 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 127 pg[9.1e( v 45'998 (0'0,45'998] local-lis/les=71/72 n=5 ec=52/39 lis/c=71/71 les/c/f=72/72/0 sis=127) [1]/[0] r=0 lpr=127 pi=[71,127)/1 crt=45'998 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:33:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:17.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 170 B/s wr, 11 op/s; 110 B/s, 2 objects/s recovering
Jan 27 08:33:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 27 08:33:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:33:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 27 08:33:18 compute-0 ceph-mon[74357]: osdmap e127: 3 total, 3 up, 3 in
Jan 27 08:33:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 27 08:33:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:33:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 27 08:33:18 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 27 08:33:18 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 128 pg[9.1f( v 45'998 (0'0,45'998] local-lis/les=93/94 n=5 ec=52/39 lis/c=93/93 les/c/f=94/94/0 sis=128 pruub=11.024416924s) [1] r=-1 lpr=128 pi=[93,128)/1 crt=45'998 mlcod 0'0 active pruub 239.335449219s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:33:18 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 128 pg[9.1f( v 45'998 (0'0,45'998] local-lis/les=93/94 n=5 ec=52/39 lis/c=93/93 les/c/f=94/94/0 sis=128 pruub=11.024366379s) [1] r=-1 lpr=128 pi=[93,128)/1 crt=45'998 mlcod 0'0 unknown NOTIFY pruub 239.335449219s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:33:18 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 128 pg[9.1e( v 45'998 (0'0,45'998] local-lis/les=127/128 n=5 ec=52/39 lis/c=71/71 les/c/f=72/72/0 sis=127) [1]/[0] async=[1] r=0 lpr=127 pi=[71,127)/1 crt=45'998 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:33:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:19.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:19 compute-0 ceph-mon[74357]: pgmap v271: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 170 B/s wr, 11 op/s; 110 B/s, 2 objects/s recovering
Jan 27 08:33:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:33:19 compute-0 ceph-mon[74357]: osdmap e128: 3 total, 3 up, 3 in
Jan 27 08:33:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 27 08:33:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 27 08:33:19 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 27 08:33:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 129 pg[9.1f( v 45'998 (0'0,45'998] local-lis/les=93/94 n=5 ec=52/39 lis/c=93/93 les/c/f=94/94/0 sis=129) [1]/[0] r=0 lpr=129 pi=[93,129)/1 crt=45'998 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:33:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 129 pg[9.1e( v 45'998 (0'0,45'998] local-lis/les=127/128 n=5 ec=52/39 lis/c=127/71 les/c/f=128/72/0 sis=129 pruub=15.002028465s) [1] async=[1] r=-1 lpr=129 pi=[71,129)/1 crt=45'998 mlcod 45'998 active pruub 244.322357178s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:33:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 129 pg[9.1f( v 45'998 (0'0,45'998] local-lis/les=93/94 n=5 ec=52/39 lis/c=93/93 les/c/f=94/94/0 sis=129) [1]/[0] r=0 lpr=129 pi=[93,129)/1 crt=45'998 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:33:19 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 129 pg[9.1e( v 45'998 (0'0,45'998] local-lis/les=127/128 n=5 ec=52/39 lis/c=127/71 les/c/f=128/72/0 sis=129 pruub=15.001937866s) [1] r=-1 lpr=129 pi=[71,129)/1 crt=45'998 mlcod 0'0 unknown NOTIFY pruub 244.322357178s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:33:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:19.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 27 08:33:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 27 08:33:20 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 27 08:33:20 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 130 pg[9.1f( v 45'998 (0'0,45'998] local-lis/les=129/130 n=5 ec=52/39 lis/c=93/93 les/c/f=94/94/0 sis=129) [1]/[0] async=[1] r=0 lpr=129 pi=[93,129)/1 crt=45'998 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:33:20 compute-0 ceph-mon[74357]: osdmap e129: 3 total, 3 up, 3 in
Jan 27 08:33:20 compute-0 ceph-mon[74357]: pgmap v274: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 27 08:33:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 27 08:33:21 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 27 08:33:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 131 pg[9.1f( v 45'998 (0'0,45'998] local-lis/les=129/130 n=5 ec=52/39 lis/c=129/93 les/c/f=130/94/0 sis=131 pruub=15.399971008s) [1] async=[1] r=-1 lpr=131 pi=[93,131)/1 crt=45'998 mlcod 45'998 active pruub 246.334930420s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 27 08:33:21 compute-0 ceph-osd[84951]: osd.0 pg_epoch: 131 pg[9.1f( v 45'998 (0'0,45'998] local-lis/les=129/130 n=5 ec=52/39 lis/c=129/93 les/c/f=130/94/0 sis=131 pruub=15.399892807s) [1] r=-1 lpr=131 pi=[93,131)/1 crt=45'998 mlcod 0'0 unknown NOTIFY pruub 246.334930420s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:33:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:21.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:21.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:22 compute-0 ceph-mon[74357]: 10.11 scrub starts
Jan 27 08:33:22 compute-0 ceph-mon[74357]: 10.11 scrub ok
Jan 27 08:33:22 compute-0 ceph-mon[74357]: 10.2 scrub starts
Jan 27 08:33:22 compute-0 ceph-mon[74357]: 10.2 scrub ok
Jan 27 08:33:22 compute-0 ceph-mon[74357]: osdmap e130: 3 total, 3 up, 3 in
Jan 27 08:33:22 compute-0 ceph-mon[74357]: 7.1d scrub starts
Jan 27 08:33:22 compute-0 ceph-mon[74357]: 7.1d scrub ok
Jan 27 08:33:22 compute-0 ceph-mon[74357]: osdmap e131: 3 total, 3 up, 3 in
Jan 27 08:33:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 27 08:33:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 27 08:33:22 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 27 08:33:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:23 compute-0 ceph-mon[74357]: 7.f scrub starts
Jan 27 08:33:23 compute-0 ceph-mon[74357]: 7.f scrub ok
Jan 27 08:33:23 compute-0 ceph-mon[74357]: osdmap e132: 3 total, 3 up, 3 in
Jan 27 08:33:23 compute-0 ceph-mon[74357]: pgmap v278: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:23.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:23.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:33:24 compute-0 ceph-mon[74357]: 10.3 scrub starts
Jan 27 08:33:24 compute-0 ceph-mon[74357]: 10.3 scrub ok
Jan 27 08:33:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:25 compute-0 ceph-mon[74357]: pgmap v279: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:25.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:25.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:26 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 27 08:33:26 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 27 08:33:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:26 compute-0 ceph-mon[74357]: 7.6 scrub starts
Jan 27 08:33:26 compute-0 ceph-mon[74357]: 7.6 scrub ok
Jan 27 08:33:26 compute-0 ceph-mon[74357]: 11.14 scrub starts
Jan 27 08:33:26 compute-0 ceph-mon[74357]: 11.14 scrub ok
Jan 27 08:33:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 170 B/s wr, 11 op/s; 54 B/s, 3 objects/s recovering
Jan 27 08:33:27 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 27 08:33:27 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 27 08:33:27 compute-0 ceph-mon[74357]: 7.e scrub starts
Jan 27 08:33:27 compute-0 ceph-mon[74357]: 7.e scrub ok
Jan 27 08:33:27 compute-0 ceph-mon[74357]: pgmap v280: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 170 B/s wr, 11 op/s; 54 B/s, 3 objects/s recovering
Jan 27 08:33:27 compute-0 ceph-mon[74357]: 8.10 scrub starts
Jan 27 08:33:27 compute-0 ceph-mon[74357]: 8.10 scrub ok
Jan 27 08:33:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:27.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:27.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 4.8 KiB/s rd, 130 B/s wr, 9 op/s; 41 B/s, 2 objects/s recovering
Jan 27 08:33:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:29.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:29 compute-0 ceph-mon[74357]: pgmap v281: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 4.8 KiB/s rd, 130 B/s wr, 9 op/s; 41 B/s, 2 objects/s recovering
Jan 27 08:33:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:29.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 110 B/s wr, 7 op/s; 35 B/s, 2 objects/s recovering
Jan 27 08:33:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:31.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:31 compute-0 ceph-mon[74357]: 7.9 scrub starts
Jan 27 08:33:31 compute-0 ceph-mon[74357]: 7.9 scrub ok
Jan 27 08:33:31 compute-0 ceph-mon[74357]: pgmap v282: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 110 B/s wr, 7 op/s; 35 B/s, 2 objects/s recovering
Jan 27 08:33:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:31.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.6 KiB/s rd, 99 B/s wr, 6 op/s; 32 B/s, 1 objects/s recovering
Jan 27 08:33:32 compute-0 ceph-mon[74357]: 10.f deep-scrub starts
Jan 27 08:33:32 compute-0 ceph-mon[74357]: 10.f deep-scrub ok
Jan 27 08:33:33 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 27 08:33:33 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 27 08:33:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:33.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:33 compute-0 sudo[103724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:33:33 compute-0 sudo[103724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:33:33 compute-0 sudo[103724]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:33 compute-0 sudo[103749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:33:33 compute-0 sudo[103749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:33:33 compute-0 sudo[103749]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:33 compute-0 ceph-mon[74357]: 8.11 scrub starts
Jan 27 08:33:33 compute-0 ceph-mon[74357]: 8.11 scrub ok
Jan 27 08:33:33 compute-0 ceph-mon[74357]: 7.1b scrub starts
Jan 27 08:33:33 compute-0 ceph-mon[74357]: 7.1b scrub ok
Jan 27 08:33:33 compute-0 ceph-mon[74357]: pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.6 KiB/s rd, 99 B/s wr, 6 op/s; 32 B/s, 1 objects/s recovering
Jan 27 08:33:33 compute-0 ceph-mon[74357]: 8.17 scrub starts
Jan 27 08:33:33 compute-0 ceph-mon[74357]: 8.17 scrub ok
Jan 27 08:33:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:33.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.1 KiB/s rd, 85 B/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Jan 27 08:33:34 compute-0 ceph-mon[74357]: pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.1 KiB/s rd, 85 B/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Jan 27 08:33:35 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 27 08:33:35 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 27 08:33:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:35.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:35.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:35 compute-0 ceph-mon[74357]: 11.12 scrub starts
Jan 27 08:33:35 compute-0 ceph-mon[74357]: 11.12 scrub ok
Jan 27 08:33:36 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 27 08:33:36 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 27 08:33:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.1 KiB/s rd, 85 B/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Jan 27 08:33:37 compute-0 ceph-mon[74357]: 11.13 deep-scrub starts
Jan 27 08:33:37 compute-0 ceph-mon[74357]: 11.13 deep-scrub ok
Jan 27 08:33:37 compute-0 ceph-mon[74357]: 11.1b scrub starts
Jan 27 08:33:37 compute-0 ceph-mon[74357]: 11.1b scrub ok
Jan 27 08:33:37 compute-0 ceph-mon[74357]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.1 KiB/s rd, 85 B/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Jan 27 08:33:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:37.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:37.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:39.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:39 compute-0 ceph-mon[74357]: pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:39.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:40 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 27 08:33:40 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 27 08:33:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:40 compute-0 ceph-mon[74357]: 4.1d deep-scrub starts
Jan 27 08:33:40 compute-0 ceph-mon[74357]: 4.1d deep-scrub ok
Jan 27 08:33:40 compute-0 ceph-mon[74357]: 8.1b scrub starts
Jan 27 08:33:40 compute-0 ceph-mon[74357]: 8.1b scrub ok
Jan 27 08:33:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 27 08:33:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:41.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 27 08:33:41 compute-0 ceph-mon[74357]: 7.8 scrub starts
Jan 27 08:33:41 compute-0 ceph-mon[74357]: 7.8 scrub ok
Jan 27 08:33:41 compute-0 ceph-mon[74357]: pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:41.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:42 compute-0 sudo[103459]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:42 compute-0 ceph-mon[74357]: 10.8 scrub starts
Jan 27 08:33:42 compute-0 ceph-mon[74357]: 10.8 scrub ok
Jan 27 08:33:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:43.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:43 compute-0 ceph-mon[74357]: 11.19 deep-scrub starts
Jan 27 08:33:43 compute-0 ceph-mon[74357]: 11.19 deep-scrub ok
Jan 27 08:33:43 compute-0 ceph-mon[74357]: pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:43.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:44 compute-0 ceph-mon[74357]: pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:45 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 27 08:33:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:33:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:33:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:33:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:33:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:33:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:33:45 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 27 08:33:45 compute-0 sudo[103929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnmzkczjlidcavahaygkzjuwornmwowg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502825.3568134-369-71152698512321/AnsiballZ_command.py'
Jan 27 08:33:45 compute-0 sudo[103929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:45.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:45 compute-0 python3.9[103931]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:33:45 compute-0 ceph-mon[74357]: 4.13 scrub starts
Jan 27 08:33:45 compute-0 ceph-mon[74357]: 4.13 scrub ok
Jan 27 08:33:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 27 08:33:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:45.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 27 08:33:45 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 27 08:33:45 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 27 08:33:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:46 compute-0 sudo[103929]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:46 compute-0 ceph-mon[74357]: 4.14 deep-scrub starts
Jan 27 08:33:46 compute-0 ceph-mon[74357]: 4.14 deep-scrub ok
Jan 27 08:33:46 compute-0 ceph-mon[74357]: 10.19 deep-scrub starts
Jan 27 08:33:46 compute-0 ceph-mon[74357]: 10.19 deep-scrub ok
Jan 27 08:33:46 compute-0 ceph-mon[74357]: 11.1c scrub starts
Jan 27 08:33:46 compute-0 ceph-mon[74357]: 11.1c scrub ok
Jan 27 08:33:46 compute-0 ceph-mon[74357]: pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:47 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 27 08:33:47 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 27 08:33:47 compute-0 sudo[104217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vivjjqhuoscppinbvtuyohoppkqmnwgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502826.8974833-393-235401868281619/AnsiballZ_selinux.py'
Jan 27 08:33:47 compute-0 sudo[104217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:47 compute-0 python3.9[104219]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 27 08:33:47 compute-0 sudo[104217]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:47.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 27 08:33:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:47.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 27 08:33:47 compute-0 ceph-mon[74357]: 8.2 scrub starts
Jan 27 08:33:47 compute-0 ceph-mon[74357]: 8.2 scrub ok
Jan 27 08:33:47 compute-0 ceph-mon[74357]: 4.1a scrub starts
Jan 27 08:33:47 compute-0 ceph-mon[74357]: 4.1a scrub ok
Jan 27 08:33:48 compute-0 sudo[104369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flindrhwybvrmfsvmefgdmdrbiihxddq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502828.2086585-426-75247496697440/AnsiballZ_command.py'
Jan 27 08:33:48 compute-0 sudo[104369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:48 compute-0 python3.9[104371]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 27 08:33:48 compute-0 sudo[104369]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:48 compute-0 ceph-mon[74357]: pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:49 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 27 08:33:49 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 27 08:33:49 compute-0 sudo[104522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klpwtraywtxcyxrdutvcminoapthinze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502828.9604797-450-190715353717351/AnsiballZ_file.py'
Jan 27 08:33:49 compute-0 sudo[104522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:49 compute-0 python3.9[104524]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:33:49 compute-0 sudo[104522]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:49.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:49.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:50 compute-0 ceph-mon[74357]: 4.e scrub starts
Jan 27 08:33:50 compute-0 ceph-mon[74357]: 4.e scrub ok
Jan 27 08:33:50 compute-0 sudo[104674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnumafcljdrscvkzapqydspnbmwrkbbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502829.6950467-474-247345069334365/AnsiballZ_mount.py'
Jan 27 08:33:50 compute-0 sudo[104674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:50 compute-0 python3.9[104676]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 27 08:33:50 compute-0 sudo[104674]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:51 compute-0 ceph-mon[74357]: pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:51 compute-0 systemd[75978]: Created slice User Background Tasks Slice.
Jan 27 08:33:51 compute-0 systemd[75978]: Starting Cleanup of User's Temporary Files and Directories...
Jan 27 08:33:51 compute-0 systemd[75978]: Finished Cleanup of User's Temporary Files and Directories.
Jan 27 08:33:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:51.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 27 08:33:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:51.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 27 08:33:52 compute-0 sudo[104829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsvdpbmywclgfemujyehfyyplknyvprr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502831.7860513-558-159541376594570/AnsiballZ_file.py'
Jan 27 08:33:52 compute-0 sudo[104829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:52 compute-0 ceph-mon[74357]: 8.16 scrub starts
Jan 27 08:33:52 compute-0 ceph-mon[74357]: 8.16 scrub ok
Jan 27 08:33:52 compute-0 ceph-mon[74357]: 7.18 scrub starts
Jan 27 08:33:52 compute-0 ceph-mon[74357]: 7.18 scrub ok
Jan 27 08:33:52 compute-0 python3.9[104831]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:33:52 compute-0 sudo[104829]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:52 compute-0 sudo[104981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rznrbvoepumhrbtgflsyssfrytxkuqxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502832.5367248-582-51889469347740/AnsiballZ_stat.py'
Jan 27 08:33:52 compute-0 sudo[104981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:52 compute-0 python3.9[104983]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:33:53 compute-0 sudo[104981]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:53 compute-0 ceph-mon[74357]: 8.1f scrub starts
Jan 27 08:33:53 compute-0 ceph-mon[74357]: 8.1f scrub ok
Jan 27 08:33:53 compute-0 ceph-mon[74357]: 10.18 deep-scrub starts
Jan 27 08:33:53 compute-0 ceph-mon[74357]: 10.18 deep-scrub ok
Jan 27 08:33:53 compute-0 ceph-mon[74357]: pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:53 compute-0 sudo[105060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkbunnrefenwnmbmhgfnndrqkehbgost ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502832.5367248-582-51889469347740/AnsiballZ_file.py'
Jan 27 08:33:53 compute-0 sudo[105060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:53 compute-0 python3.9[105062]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:33:53 compute-0 sudo[105060]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:53.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:53.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:53 compute-0 sudo[105087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:33:54 compute-0 sudo[105087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:33:54 compute-0 sudo[105087]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:54 compute-0 sudo[105112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:33:54 compute-0 sudo[105112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:33:54 compute-0 sudo[105112]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:54 compute-0 ceph-mon[74357]: 4.1c scrub starts
Jan 27 08:33:54 compute-0 ceph-mon[74357]: 4.1c scrub ok
Jan 27 08:33:54 compute-0 sudo[105262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwjnqvibfhpahzdwfyqdbgjqnqjblgye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502834.2867012-645-84017503526789/AnsiballZ_stat.py'
Jan 27 08:33:54 compute-0 sudo[105262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:54 compute-0 python3.9[105264]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:33:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:54 compute-0 sudo[105262]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:55 compute-0 ceph-mon[74357]: pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:55.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:55 compute-0 sudo[105417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zelbbvaygivbfzmvukeleyojsbcvvfse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502835.3681192-684-262149122121392/AnsiballZ_getent.py'
Jan 27 08:33:55 compute-0 sudo[105417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 27 08:33:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:55.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 27 08:33:56 compute-0 python3.9[105419]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 27 08:33:56 compute-0 sudo[105417]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:56 compute-0 sudo[105570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjwaujtlgjhiicpwaiiznkanacppgxqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502836.3799934-714-95157142175670/AnsiballZ_getent.py'
Jan 27 08:33:56 compute-0 sudo[105570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:56 compute-0 python3.9[105572]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 27 08:33:56 compute-0 sudo[105570]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:56 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 27 08:33:56 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 27 08:33:57 compute-0 sudo[105724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spwnqaxmezqwbpiizrgcppheevexdqyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502837.0333574-738-86381354856811/AnsiballZ_group.py'
Jan 27 08:33:57 compute-0 sudo[105724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 27 08:33:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 27 08:33:57 compute-0 python3.9[105726]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 08:33:57 compute-0 sudo[105724]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:57 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 27 08:33:57 compute-0 ceph-mon[74357]: pgmap v295: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:57 compute-0 ceph-mon[74357]: 11.1d scrub starts
Jan 27 08:33:57 compute-0 ceph-mon[74357]: 11.1d scrub ok
Jan 27 08:33:57 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 27 08:33:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:57.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:58 compute-0 sudo[105876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbjthpzxurshyoiiizmcppvxzjblynns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502838.0559144-765-168601683056940/AnsiballZ_file.py'
Jan 27 08:33:58 compute-0 sudo[105876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:58 compute-0 python3.9[105878]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 27 08:33:58 compute-0 sudo[105876]: pam_unix(sudo:session): session closed for user root
Jan 27 08:33:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:58 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.1 deep-scrub starts
Jan 27 08:33:58 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.1 deep-scrub ok
Jan 27 08:33:58 compute-0 ceph-mon[74357]: 11.1e scrub starts
Jan 27 08:33:58 compute-0 ceph-mon[74357]: 11.1e scrub ok
Jan 27 08:33:59 compute-0 sudo[106029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-simwdupnxxmnuyceaqnegbxevdtrkkcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502839.1966014-798-98479937395129/AnsiballZ_dnf.py'
Jan 27 08:33:59 compute-0 sudo[106029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:33:59 compute-0 python3.9[106031]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:33:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:33:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:33:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:33:59 compute-0 ceph-mon[74357]: 10.14 scrub starts
Jan 27 08:33:59 compute-0 ceph-mon[74357]: 10.14 scrub ok
Jan 27 08:33:59 compute-0 ceph-mon[74357]: pgmap v296: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:33:59 compute-0 ceph-mon[74357]: 11.1 deep-scrub starts
Jan 27 08:33:59 compute-0 ceph-mon[74357]: 11.1 deep-scrub ok
Jan 27 08:33:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:33:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 27 08:33:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:33:59.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 27 08:34:00 compute-0 sudo[106033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:00 compute-0 sudo[106033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:00 compute-0 sudo[106033]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:00 compute-0 sudo[106058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:34:00 compute-0 sudo[106058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:00 compute-0 sudo[106058]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:00 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 27 08:34:00 compute-0 sudo[106083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:00 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 27 08:34:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:00 compute-0 sudo[106083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:00 compute-0 sudo[106083]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:00 compute-0 sudo[106108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:34:00 compute-0 sudo[106108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:00 compute-0 ceph-mon[74357]: pgmap v297: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:00 compute-0 sudo[106029]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:01 compute-0 sudo[106108]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:01 compute-0 sudo[106314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haytkwctcobpklbbadhipxsjfgyyzewv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502841.195662-822-57866739096035/AnsiballZ_file.py'
Jan 27 08:34:01 compute-0 sudo[106314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:01 compute-0 python3.9[106316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:34:01 compute-0 sudo[106314]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:01 compute-0 ceph-mon[74357]: 11.5 scrub starts
Jan 27 08:34:01 compute-0 ceph-mon[74357]: 11.5 scrub ok
Jan 27 08:34:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:01.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:02 compute-0 sudo[106466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvfojmleuhxqtyrfbhadkumkrfloyybu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502841.931365-846-255383787083220/AnsiballZ_stat.py'
Jan 27 08:34:02 compute-0 sudo[106466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:02 compute-0 python3.9[106468]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:34:02 compute-0 sudo[106466]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:34:02 compute-0 sudo[106544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhpxaiecthniqnqnqohwlsifqfvhlovq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502841.931365-846-255383787083220/AnsiballZ_file.py'
Jan 27 08:34:02 compute-0 sudo[106544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:34:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:34:02 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:34:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:02 compute-0 python3.9[106546]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:34:02 compute-0 sudo[106544]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:02 compute-0 ceph-mon[74357]: 11.17 scrub starts
Jan 27 08:34:02 compute-0 ceph-mon[74357]: 11.17 scrub ok
Jan 27 08:34:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:34:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:34:02 compute-0 ceph-mon[74357]: pgmap v298: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:34:03 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:34:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:34:03 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:34:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:34:03 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:34:03 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 54a9ea34-22a8-4acf-9152-df66308da70d does not exist
Jan 27 08:34:03 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 1a103049-76fd-4d8b-a223-260a0933c412 does not exist
Jan 27 08:34:03 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b6b12169-e175-44ab-b229-94ffb240efbc does not exist
Jan 27 08:34:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:34:03 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:34:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:34:03 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:34:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:34:03 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:34:03 compute-0 sudo[106647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:03 compute-0 sudo[106647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:03 compute-0 sudo[106647]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:03 compute-0 sudo[106696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:34:03 compute-0 sudo[106696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:03 compute-0 sudo[106696]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:03 compute-0 sudo[106747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fksdxznybyuqnemphquehwaivqiijniu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502843.1518254-885-253934234132349/AnsiballZ_stat.py'
Jan 27 08:34:03 compute-0 sudo[106747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:03 compute-0 sudo[106748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:03 compute-0 sudo[106748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:03 compute-0 sudo[106748]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:03 compute-0 sudo[106775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:34:03 compute-0 sudo[106775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:03 compute-0 python3.9[106757]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:34:03 compute-0 sudo[106747]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:03 compute-0 ceph-mon[74357]: 10.5 scrub starts
Jan 27 08:34:03 compute-0 ceph-mon[74357]: 10.5 scrub ok
Jan 27 08:34:03 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:34:03 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:34:03 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:34:03 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:34:03 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:34:03 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:34:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:34:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:03.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:34:03 compute-0 podman[106873]: 2026-01-27 08:34:03.976197022 +0000 UTC m=+0.038519012 container create 0e476f599ed1358ccb6e366f3ebc283a63347d3defe437668a897a285bf956a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:34:04 compute-0 systemd[1]: Started libpod-conmon-0e476f599ed1358ccb6e366f3ebc283a63347d3defe437668a897a285bf956a6.scope.
Jan 27 08:34:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:34:04 compute-0 sudo[106934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fttuxzutxumivonhfvyuzcobxcgcdlwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502843.1518254-885-253934234132349/AnsiballZ_file.py'
Jan 27 08:34:04 compute-0 sudo[106934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:04 compute-0 podman[106873]: 2026-01-27 08:34:04.049222592 +0000 UTC m=+0.111544582 container init 0e476f599ed1358ccb6e366f3ebc283a63347d3defe437668a897a285bf956a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:34:04 compute-0 podman[106873]: 2026-01-27 08:34:04.054839278 +0000 UTC m=+0.117161268 container start 0e476f599ed1358ccb6e366f3ebc283a63347d3defe437668a897a285bf956a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:34:04 compute-0 podman[106873]: 2026-01-27 08:34:03.958240624 +0000 UTC m=+0.020562624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:34:04 compute-0 podman[106873]: 2026-01-27 08:34:04.057666137 +0000 UTC m=+0.119988117 container attach 0e476f599ed1358ccb6e366f3ebc283a63347d3defe437668a897a285bf956a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 27 08:34:04 compute-0 condescending_wilbur[106932]: 167 167
Jan 27 08:34:04 compute-0 systemd[1]: libpod-0e476f599ed1358ccb6e366f3ebc283a63347d3defe437668a897a285bf956a6.scope: Deactivated successfully.
Jan 27 08:34:04 compute-0 conmon[106932]: conmon 0e476f599ed1358ccb6e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e476f599ed1358ccb6e366f3ebc283a63347d3defe437668a897a285bf956a6.scope/container/memory.events
Jan 27 08:34:04 compute-0 podman[106873]: 2026-01-27 08:34:04.060697641 +0000 UTC m=+0.123019641 container died 0e476f599ed1358ccb6e366f3ebc283a63347d3defe437668a897a285bf956a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:34:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f8d94d0976c58d4b1f3bfc5010f88e8dacf60962ab38a2b811ebb64e06ee6b6-merged.mount: Deactivated successfully.
Jan 27 08:34:04 compute-0 podman[106873]: 2026-01-27 08:34:04.096450525 +0000 UTC m=+0.158772505 container remove 0e476f599ed1358ccb6e366f3ebc283a63347d3defe437668a897a285bf956a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 08:34:04 compute-0 systemd[1]: libpod-conmon-0e476f599ed1358ccb6e366f3ebc283a63347d3defe437668a897a285bf956a6.scope: Deactivated successfully.
Jan 27 08:34:04 compute-0 python3.9[106937]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:34:04 compute-0 sudo[106934]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:04 compute-0 podman[106957]: 2026-01-27 08:34:04.277802486 +0000 UTC m=+0.046751401 container create 0085619a6d8d9cc2a52fa573c2fe0301d70d8c1472d83c55b28136b5340918ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_black, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:34:04 compute-0 systemd[1]: Started libpod-conmon-0085619a6d8d9cc2a52fa573c2fe0301d70d8c1472d83c55b28136b5340918ee.scope.
Jan 27 08:34:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd420f43efffd2828274de0083f3e9c1b56abf4a15588e2579ded9625d11809/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd420f43efffd2828274de0083f3e9c1b56abf4a15588e2579ded9625d11809/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd420f43efffd2828274de0083f3e9c1b56abf4a15588e2579ded9625d11809/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd420f43efffd2828274de0083f3e9c1b56abf4a15588e2579ded9625d11809/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd420f43efffd2828274de0083f3e9c1b56abf4a15588e2579ded9625d11809/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:04 compute-0 podman[106957]: 2026-01-27 08:34:04.258684834 +0000 UTC m=+0.027633779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:34:04 compute-0 podman[106957]: 2026-01-27 08:34:04.359538958 +0000 UTC m=+0.128487903 container init 0085619a6d8d9cc2a52fa573c2fe0301d70d8c1472d83c55b28136b5340918ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_black, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Jan 27 08:34:04 compute-0 podman[106957]: 2026-01-27 08:34:04.367731935 +0000 UTC m=+0.136680860 container start 0085619a6d8d9cc2a52fa573c2fe0301d70d8c1472d83c55b28136b5340918ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:34:04 compute-0 podman[106957]: 2026-01-27 08:34:04.372405226 +0000 UTC m=+0.141354171 container attach 0085619a6d8d9cc2a52fa573c2fe0301d70d8c1472d83c55b28136b5340918ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_black, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 08:34:04 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 27 08:34:04 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 27 08:34:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:04 compute-0 ceph-mon[74357]: 4.9 scrub starts
Jan 27 08:34:04 compute-0 ceph-mon[74357]: 4.9 scrub ok
Jan 27 08:34:04 compute-0 ceph-mon[74357]: pgmap v299: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:05 compute-0 frosty_black[106994]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:34:05 compute-0 frosty_black[106994]: --> relative data size: 1.0
Jan 27 08:34:05 compute-0 frosty_black[106994]: --> All data devices are unavailable
Jan 27 08:34:05 compute-0 systemd[1]: libpod-0085619a6d8d9cc2a52fa573c2fe0301d70d8c1472d83c55b28136b5340918ee.scope: Deactivated successfully.
Jan 27 08:34:05 compute-0 podman[106957]: 2026-01-27 08:34:05.170061796 +0000 UTC m=+0.939010711 container died 0085619a6d8d9cc2a52fa573c2fe0301d70d8c1472d83c55b28136b5340918ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_black, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccd420f43efffd2828274de0083f3e9c1b56abf4a15588e2579ded9625d11809-merged.mount: Deactivated successfully.
Jan 27 08:34:05 compute-0 sudo[107146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osvruexznpvhkiueubfukizczsobvwyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502844.9462109-930-36778280102150/AnsiballZ_dnf.py'
Jan 27 08:34:05 compute-0 sudo[107146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:05 compute-0 podman[106957]: 2026-01-27 08:34:05.221362283 +0000 UTC m=+0.990311208 container remove 0085619a6d8d9cc2a52fa573c2fe0301d70d8c1472d83c55b28136b5340918ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 27 08:34:05 compute-0 systemd[1]: libpod-conmon-0085619a6d8d9cc2a52fa573c2fe0301d70d8c1472d83c55b28136b5340918ee.scope: Deactivated successfully.
Jan 27 08:34:05 compute-0 sudo[106775]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:05 compute-0 sudo[107156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:05 compute-0 sudo[107156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:05 compute-0 sudo[107156]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:05 compute-0 sudo[107181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:34:05 compute-0 sudo[107181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:05 compute-0 sudo[107181]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:05 compute-0 sudo[107206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:05 compute-0 sudo[107206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:05 compute-0 sudo[107206]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:05 compute-0 python3.9[107155]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:34:05 compute-0 sudo[107231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:34:05 compute-0 sudo[107231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:05 compute-0 podman[107297]: 2026-01-27 08:34:05.758418749 +0000 UTC m=+0.041782422 container create 03256305bf677650d6a48a1bff781e24fbad2789b0baa3b49d4f76aece8d12cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 08:34:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:34:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:05.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:34:05 compute-0 systemd[1]: Started libpod-conmon-03256305bf677650d6a48a1bff781e24fbad2789b0baa3b49d4f76aece8d12cd.scope.
Jan 27 08:34:05 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:34:05 compute-0 podman[107297]: 2026-01-27 08:34:05.738594479 +0000 UTC m=+0.021958172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:34:05 compute-0 podman[107297]: 2026-01-27 08:34:05.842218549 +0000 UTC m=+0.125582252 container init 03256305bf677650d6a48a1bff781e24fbad2789b0baa3b49d4f76aece8d12cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:34:05 compute-0 podman[107297]: 2026-01-27 08:34:05.850447308 +0000 UTC m=+0.133811001 container start 03256305bf677650d6a48a1bff781e24fbad2789b0baa3b49d4f76aece8d12cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 27 08:34:05 compute-0 podman[107297]: 2026-01-27 08:34:05.854216713 +0000 UTC m=+0.137580386 container attach 03256305bf677650d6a48a1bff781e24fbad2789b0baa3b49d4f76aece8d12cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 08:34:05 compute-0 kind_booth[107313]: 167 167
Jan 27 08:34:05 compute-0 systemd[1]: libpod-03256305bf677650d6a48a1bff781e24fbad2789b0baa3b49d4f76aece8d12cd.scope: Deactivated successfully.
Jan 27 08:34:05 compute-0 podman[107297]: 2026-01-27 08:34:05.8570074 +0000 UTC m=+0.140371083 container died 03256305bf677650d6a48a1bff781e24fbad2789b0baa3b49d4f76aece8d12cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b99b225b73e7804bf8653d107e34e261d6c23cff2bb221f087913dede107bd11-merged.mount: Deactivated successfully.
Jan 27 08:34:05 compute-0 podman[107297]: 2026-01-27 08:34:05.890663346 +0000 UTC m=+0.174027009 container remove 03256305bf677650d6a48a1bff781e24fbad2789b0baa3b49d4f76aece8d12cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:34:05 compute-0 systemd[1]: libpod-conmon-03256305bf677650d6a48a1bff781e24fbad2789b0baa3b49d4f76aece8d12cd.scope: Deactivated successfully.
Jan 27 08:34:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:34:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:05.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:34:06 compute-0 ceph-mon[74357]: 8.18 scrub starts
Jan 27 08:34:06 compute-0 ceph-mon[74357]: 8.18 scrub ok
Jan 27 08:34:06 compute-0 ceph-mon[74357]: 10.15 scrub starts
Jan 27 08:34:06 compute-0 ceph-mon[74357]: 10.15 scrub ok
Jan 27 08:34:06 compute-0 podman[107337]: 2026-01-27 08:34:06.082324193 +0000 UTC m=+0.056277195 container create 94d9b1418ecd659a49d96af7071ca4204c31ffb11b4c4f06f51c780525e06604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:34:06 compute-0 systemd[1]: Started libpod-conmon-94d9b1418ecd659a49d96af7071ca4204c31ffb11b4c4f06f51c780525e06604.scope.
Jan 27 08:34:06 compute-0 podman[107337]: 2026-01-27 08:34:06.06063817 +0000 UTC m=+0.034591182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:34:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b6850d67438f8865f253da5daf58870a546b4db5a2539dfa90e7f81cb241cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b6850d67438f8865f253da5daf58870a546b4db5a2539dfa90e7f81cb241cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b6850d67438f8865f253da5daf58870a546b4db5a2539dfa90e7f81cb241cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b6850d67438f8865f253da5daf58870a546b4db5a2539dfa90e7f81cb241cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:06 compute-0 podman[107337]: 2026-01-27 08:34:06.193785811 +0000 UTC m=+0.167738793 container init 94d9b1418ecd659a49d96af7071ca4204c31ffb11b4c4f06f51c780525e06604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elgamal, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 27 08:34:06 compute-0 podman[107337]: 2026-01-27 08:34:06.206036041 +0000 UTC m=+0.179989053 container start 94d9b1418ecd659a49d96af7071ca4204c31ffb11b4c4f06f51c780525e06604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elgamal, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:34:06 compute-0 podman[107337]: 2026-01-27 08:34:06.32578084 +0000 UTC m=+0.299733842 container attach 94d9b1418ecd659a49d96af7071ca4204c31ffb11b4c4f06f51c780525e06604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elgamal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:34:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:06 compute-0 sudo[107146]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:06 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 27 08:34:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:06 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 27 08:34:06 compute-0 sad_elgamal[107353]: {
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:     "0": [
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:         {
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             "devices": [
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "/dev/loop3"
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             ],
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             "lv_name": "ceph_lv0",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             "lv_size": "7511998464",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             "name": "ceph_lv0",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             "tags": {
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "ceph.cluster_name": "ceph",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "ceph.crush_device_class": "",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "ceph.encrypted": "0",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "ceph.osd_id": "0",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "ceph.type": "block",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:                 "ceph.vdo": "0"
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             },
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             "type": "block",
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:             "vg_name": "ceph_vg0"
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:         }
Jan 27 08:34:06 compute-0 sad_elgamal[107353]:     ]
Jan 27 08:34:06 compute-0 sad_elgamal[107353]: }
Jan 27 08:34:06 compute-0 systemd[1]: libpod-94d9b1418ecd659a49d96af7071ca4204c31ffb11b4c4f06f51c780525e06604.scope: Deactivated successfully.
Jan 27 08:34:06 compute-0 podman[107337]: 2026-01-27 08:34:06.977297708 +0000 UTC m=+0.951250690 container died 94d9b1418ecd659a49d96af7071ca4204c31ffb11b4c4f06f51c780525e06604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-23b6850d67438f8865f253da5daf58870a546b4db5a2539dfa90e7f81cb241cb-merged.mount: Deactivated successfully.
Jan 27 08:34:07 compute-0 podman[107337]: 2026-01-27 08:34:07.024185732 +0000 UTC m=+0.998138714 container remove 94d9b1418ecd659a49d96af7071ca4204c31ffb11b4c4f06f51c780525e06604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 27 08:34:07 compute-0 systemd[1]: libpod-conmon-94d9b1418ecd659a49d96af7071ca4204c31ffb11b4c4f06f51c780525e06604.scope: Deactivated successfully.
Jan 27 08:34:07 compute-0 sudo[107231]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:07 compute-0 ceph-mon[74357]: 7.10 scrub starts
Jan 27 08:34:07 compute-0 ceph-mon[74357]: 7.10 scrub ok
Jan 27 08:34:07 compute-0 ceph-mon[74357]: pgmap v300: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:07 compute-0 sudo[107398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:07 compute-0 sudo[107398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:07 compute-0 sudo[107398]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:07 compute-0 sudo[107423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:34:07 compute-0 sudo[107423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:07 compute-0 sudo[107423]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:07 compute-0 sudo[107448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:07 compute-0 sudo[107448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:07 compute-0 sudo[107448]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:07 compute-0 sudo[107473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:34:07 compute-0 sudo[107473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:07 compute-0 podman[107627]: 2026-01-27 08:34:07.526048301 +0000 UTC m=+0.051577834 container create 2f25eb74aa680a960f3b88610ed89bfcd0a30a89b6af03fe99857409c9055a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_saha, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:34:07 compute-0 systemd[1]: Started libpod-conmon-2f25eb74aa680a960f3b88610ed89bfcd0a30a89b6af03fe99857409c9055a75.scope.
Jan 27 08:34:07 compute-0 podman[107627]: 2026-01-27 08:34:07.496752327 +0000 UTC m=+0.022281910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:34:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:34:07 compute-0 podman[107627]: 2026-01-27 08:34:07.614209652 +0000 UTC m=+0.139739195 container init 2f25eb74aa680a960f3b88610ed89bfcd0a30a89b6af03fe99857409c9055a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_saha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:34:07 compute-0 podman[107627]: 2026-01-27 08:34:07.621102743 +0000 UTC m=+0.146632236 container start 2f25eb74aa680a960f3b88610ed89bfcd0a30a89b6af03fe99857409c9055a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_saha, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 08:34:07 compute-0 mystifying_saha[107654]: 167 167
Jan 27 08:34:07 compute-0 podman[107627]: 2026-01-27 08:34:07.62636116 +0000 UTC m=+0.151890683 container attach 2f25eb74aa680a960f3b88610ed89bfcd0a30a89b6af03fe99857409c9055a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_saha, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:34:07 compute-0 podman[107627]: 2026-01-27 08:34:07.62673633 +0000 UTC m=+0.152265823 container died 2f25eb74aa680a960f3b88610ed89bfcd0a30a89b6af03fe99857409c9055a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_saha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:34:07 compute-0 systemd[1]: libpod-2f25eb74aa680a960f3b88610ed89bfcd0a30a89b6af03fe99857409c9055a75.scope: Deactivated successfully.
Jan 27 08:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-25970a8a173ba96da0b5fe62597da71e71c90c09d8b0dfabe9eca9a0725458c3-merged.mount: Deactivated successfully.
Jan 27 08:34:07 compute-0 podman[107627]: 2026-01-27 08:34:07.667053231 +0000 UTC m=+0.192582734 container remove 2f25eb74aa680a960f3b88610ed89bfcd0a30a89b6af03fe99857409c9055a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 08:34:07 compute-0 systemd[1]: libpod-conmon-2f25eb74aa680a960f3b88610ed89bfcd0a30a89b6af03fe99857409c9055a75.scope: Deactivated successfully.
Jan 27 08:34:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:07.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:07 compute-0 podman[107704]: 2026-01-27 08:34:07.813958814 +0000 UTC m=+0.046467032 container create 5a89bf112420ce765d07433555224d319f6047cd2ea7b9e8456eeb23b2363050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:34:07 compute-0 python3.9[107696]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:34:07 compute-0 systemd[1]: Started libpod-conmon-5a89bf112420ce765d07433555224d319f6047cd2ea7b9e8456eeb23b2363050.scope.
Jan 27 08:34:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82e27e6008691363d9e2423d0f409635da7bde4ca037697f39fe413b1d506af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82e27e6008691363d9e2423d0f409635da7bde4ca037697f39fe413b1d506af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82e27e6008691363d9e2423d0f409635da7bde4ca037697f39fe413b1d506af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82e27e6008691363d9e2423d0f409635da7bde4ca037697f39fe413b1d506af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:07 compute-0 podman[107704]: 2026-01-27 08:34:07.888918328 +0000 UTC m=+0.121426466 container init 5a89bf112420ce765d07433555224d319f6047cd2ea7b9e8456eeb23b2363050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:34:07 compute-0 podman[107704]: 2026-01-27 08:34:07.795224803 +0000 UTC m=+0.027732941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:34:07 compute-0 podman[107704]: 2026-01-27 08:34:07.895348436 +0000 UTC m=+0.127856554 container start 5a89bf112420ce765d07433555224d319f6047cd2ea7b9e8456eeb23b2363050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:34:07 compute-0 podman[107704]: 2026-01-27 08:34:07.89873456 +0000 UTC m=+0.131242718 container attach 5a89bf112420ce765d07433555224d319f6047cd2ea7b9e8456eeb23b2363050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hertz, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:34:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:07.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:08 compute-0 ceph-mon[74357]: 11.4 scrub starts
Jan 27 08:34:08 compute-0 ceph-mon[74357]: 11.4 scrub ok
Jan 27 08:34:08 compute-0 recursing_hertz[107723]: {
Jan 27 08:34:08 compute-0 recursing_hertz[107723]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:34:08 compute-0 recursing_hertz[107723]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:34:08 compute-0 recursing_hertz[107723]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:34:08 compute-0 recursing_hertz[107723]:         "osd_id": 0,
Jan 27 08:34:08 compute-0 recursing_hertz[107723]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:34:08 compute-0 recursing_hertz[107723]:         "type": "bluestore"
Jan 27 08:34:08 compute-0 recursing_hertz[107723]:     }
Jan 27 08:34:08 compute-0 recursing_hertz[107723]: }
Jan 27 08:34:08 compute-0 systemd[1]: libpod-5a89bf112420ce765d07433555224d319f6047cd2ea7b9e8456eeb23b2363050.scope: Deactivated successfully.
Jan 27 08:34:08 compute-0 podman[107704]: 2026-01-27 08:34:08.726756186 +0000 UTC m=+0.959264314 container died 5a89bf112420ce765d07433555224d319f6047cd2ea7b9e8456eeb23b2363050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:34:08 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 27 08:34:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:08 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 27 08:34:08 compute-0 python3.9[107885]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 27 08:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a82e27e6008691363d9e2423d0f409635da7bde4ca037697f39fe413b1d506af-merged.mount: Deactivated successfully.
Jan 27 08:34:08 compute-0 podman[107704]: 2026-01-27 08:34:08.810115323 +0000 UTC m=+1.042623431 container remove 5a89bf112420ce765d07433555224d319f6047cd2ea7b9e8456eeb23b2363050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hertz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 08:34:08 compute-0 systemd[1]: libpod-conmon-5a89bf112420ce765d07433555224d319f6047cd2ea7b9e8456eeb23b2363050.scope: Deactivated successfully.
Jan 27 08:34:08 compute-0 sudo[107473]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:34:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:34:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:34:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:34:08 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 6573a919-7409-40fb-9cdd-9f076c01245c does not exist
Jan 27 08:34:08 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 6d9b5f51-ec34-478d-ac8a-bfdd3ae8f8a5 does not exist
Jan 27 08:34:08 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c1ea2be8-7bff-4d60-8753-e21d4f8150fb does not exist
Jan 27 08:34:08 compute-0 sudo[107930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:08 compute-0 sudo[107930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:08 compute-0 sudo[107930]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:08 compute-0 sudo[107955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:34:08 compute-0 sudo[107955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:08 compute-0 sudo[107955]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:09 compute-0 ceph-mon[74357]: 10.13 scrub starts
Jan 27 08:34:09 compute-0 ceph-mon[74357]: 10.13 scrub ok
Jan 27 08:34:09 compute-0 ceph-mon[74357]: pgmap v301: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:34:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:34:09 compute-0 python3.9[108106]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:34:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:09.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:09.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:10 compute-0 ceph-mon[74357]: 8.8 scrub starts
Jan 27 08:34:10 compute-0 ceph-mon[74357]: 8.8 scrub ok
Jan 27 08:34:10 compute-0 sudo[108256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjqrmdeegrffhagswohakrywjeqzhxcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502850.0196702-1053-231827710902848/AnsiballZ_systemd.py'
Jan 27 08:34:10 compute-0 sudo[108256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:11 compute-0 python3.9[108258]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:34:11 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 27 08:34:11 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 27 08:34:11 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 27 08:34:11 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 27 08:34:11 compute-0 ceph-mon[74357]: pgmap v302: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:11 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 27 08:34:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:11 compute-0 sudo[108256]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:11 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 27 08:34:11 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 27 08:34:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:11.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:34:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:11.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:34:12 compute-0 ceph-mon[74357]: 8.15 scrub starts
Jan 27 08:34:12 compute-0 ceph-mon[74357]: 8.15 scrub ok
Jan 27 08:34:12 compute-0 python3.9[108421]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 27 08:34:12 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 27 08:34:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:12 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 27 08:34:13 compute-0 ceph-mon[74357]: 4.18 scrub starts
Jan 27 08:34:13 compute-0 ceph-mon[74357]: 4.18 scrub ok
Jan 27 08:34:13 compute-0 ceph-mon[74357]: pgmap v303: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:13 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.d deep-scrub starts
Jan 27 08:34:13 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.d deep-scrub ok
Jan 27 08:34:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:13.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:34:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:13.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:34:14 compute-0 sudo[108447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:14 compute-0 sudo[108447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:14 compute-0 sudo[108447]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:14 compute-0 sudo[108472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:14 compute-0 sudo[108472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:14 compute-0 sudo[108472]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:14 compute-0 ceph-mon[74357]: 4.a scrub starts
Jan 27 08:34:14 compute-0 ceph-mon[74357]: 4.a scrub ok
Jan 27 08:34:14 compute-0 ceph-mon[74357]: 7.1e deep-scrub starts
Jan 27 08:34:14 compute-0 ceph-mon[74357]: 7.1e deep-scrub ok
Jan 27 08:34:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:34:14
Jan 27 08:34:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:34:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:34:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'backups', '.rgw.root', '.mgr', 'volumes', 'images', 'default.rgw.control']
Jan 27 08:34:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:34:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:34:15 compute-0 ceph-mon[74357]: 4.d deep-scrub starts
Jan 27 08:34:15 compute-0 ceph-mon[74357]: 4.d deep-scrub ok
Jan 27 08:34:15 compute-0 ceph-mon[74357]: pgmap v304: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:15 compute-0 ceph-mon[74357]: 11.16 deep-scrub starts
Jan 27 08:34:15 compute-0 ceph-mon[74357]: 11.16 deep-scrub ok
Jan 27 08:34:15 compute-0 sudo[108623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foqdqlwtxdypumxvzgsbgvunggxfeglz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502855.3219955-1224-272531953645850/AnsiballZ_systemd.py'
Jan 27 08:34:15 compute-0 sudo[108623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:15.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:15 compute-0 python3.9[108625]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:34:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:15.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:15 compute-0 sudo[108623]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:16 compute-0 sudo[108777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvobefbjokrbgcdlfanyacwbntkkzzff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502856.1241155-1224-98814607529558/AnsiballZ_systemd.py'
Jan 27 08:34:16 compute-0 sudo[108777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:16 compute-0 ceph-mon[74357]: 5.19 scrub starts
Jan 27 08:34:16 compute-0 ceph-mon[74357]: 5.19 scrub ok
Jan 27 08:34:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:16 compute-0 python3.9[108779]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:34:16 compute-0 sudo[108777]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:17 compute-0 ceph-mon[74357]: pgmap v305: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:17.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:34:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:17.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:34:18 compute-0 sshd-session[100272]: Connection closed by 192.168.122.30 port 53510
Jan 27 08:34:18 compute-0 sshd-session[100269]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:34:18 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 27 08:34:18 compute-0 systemd[1]: session-35.scope: Consumed 1min 3.700s CPU time.
Jan 27 08:34:18 compute-0 systemd-logind[799]: Session 35 logged out. Waiting for processes to exit.
Jan 27 08:34:18 compute-0 systemd-logind[799]: Removed session 35.
Jan 27 08:34:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:19 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 27 08:34:19 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 27 08:34:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:19.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:19 compute-0 ceph-mon[74357]: pgmap v306: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:19 compute-0 ceph-mon[74357]: 8.3 deep-scrub starts
Jan 27 08:34:19 compute-0 ceph-mon[74357]: 8.3 deep-scrub ok
Jan 27 08:34:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:19.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:20 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 27 08:34:20 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 27 08:34:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:20 compute-0 ceph-mon[74357]: 8.14 scrub starts
Jan 27 08:34:20 compute-0 ceph-mon[74357]: 8.14 scrub ok
Jan 27 08:34:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:21.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:21 compute-0 ceph-mon[74357]: 11.f scrub starts
Jan 27 08:34:21 compute-0 ceph-mon[74357]: 11.f scrub ok
Jan 27 08:34:21 compute-0 ceph-mon[74357]: pgmap v307: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:21.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:22 compute-0 ceph-mon[74357]: 5.1e scrub starts
Jan 27 08:34:22 compute-0 ceph-mon[74357]: 5.1e scrub ok
Jan 27 08:34:22 compute-0 ceph-mon[74357]: pgmap v308: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:23 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.c deep-scrub starts
Jan 27 08:34:23 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.c deep-scrub ok
Jan 27 08:34:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:23.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:23.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:23 compute-0 ceph-mon[74357]: 4.19 scrub starts
Jan 27 08:34:23 compute-0 ceph-mon[74357]: 4.19 scrub ok
Jan 27 08:34:24 compute-0 sshd-session[108810]: Accepted publickey for zuul from 192.168.122.30 port 37144 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:34:24 compute-0 systemd-logind[799]: New session 36 of user zuul.
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:34:24 compute-0 systemd[1]: Started Session 36 of User zuul.
Jan 27 08:34:24 compute-0 sshd-session[108810]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:34:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:24 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 27 08:34:24 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 27 08:34:25 compute-0 ceph-mon[74357]: 5.c deep-scrub starts
Jan 27 08:34:25 compute-0 ceph-mon[74357]: 5.c deep-scrub ok
Jan 27 08:34:25 compute-0 ceph-mon[74357]: 4.c deep-scrub starts
Jan 27 08:34:25 compute-0 ceph-mon[74357]: 4.c deep-scrub ok
Jan 27 08:34:25 compute-0 ceph-mon[74357]: pgmap v309: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:25 compute-0 ceph-mon[74357]: 4.3 scrub starts
Jan 27 08:34:25 compute-0 ceph-mon[74357]: 4.3 scrub ok
Jan 27 08:34:25 compute-0 python3.9[108963]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:34:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:25.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:25.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:26 compute-0 ceph-mon[74357]: 5.1d scrub starts
Jan 27 08:34:26 compute-0 ceph-mon[74357]: 5.1d scrub ok
Jan 27 08:34:26 compute-0 ceph-mon[74357]: 11.7 scrub starts
Jan 27 08:34:26 compute-0 ceph-mon[74357]: 11.7 scrub ok
Jan 27 08:34:26 compute-0 ceph-mon[74357]: 6.1 scrub starts
Jan 27 08:34:26 compute-0 ceph-mon[74357]: 6.1 scrub ok
Jan 27 08:34:26 compute-0 sudo[109118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpdpvwfuqibnvjodjzvvwqcpfqvicpca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502865.7031355-68-111536744820676/AnsiballZ_getent.py'
Jan 27 08:34:26 compute-0 sudo[109118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:26 compute-0 python3.9[109120]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 27 08:34:26 compute-0 sudo[109118]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:27 compute-0 ceph-mon[74357]: pgmap v310: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:27 compute-0 sudo[109272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfgtosbmppqgpwlvxwwhkhztohspksfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502866.8280938-104-79897551729259/AnsiballZ_setup.py'
Jan 27 08:34:27 compute-0 sudo[109272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:27 compute-0 python3.9[109274]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:34:27 compute-0 sudo[109272]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:27.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:27.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:28 compute-0 sudo[109356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyuqcsfzrihibbpyebohiwpfvunueorc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502866.8280938-104-79897551729259/AnsiballZ_dnf.py'
Jan 27 08:34:28 compute-0 sudo[109356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:28 compute-0 python3.9[109358]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 08:34:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:29 compute-0 ceph-mon[74357]: pgmap v311: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:29 compute-0 sudo[109356]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:34:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:29.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:34:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:29.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:30 compute-0 ceph-mon[74357]: 8.f deep-scrub starts
Jan 27 08:34:30 compute-0 ceph-mon[74357]: 8.f deep-scrub ok
Jan 27 08:34:30 compute-0 sudo[109510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuwauyuxkqoecbyvfwwdclwzkslxobxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502869.8535416-146-261698010868987/AnsiballZ_dnf.py'
Jan 27 08:34:30 compute-0 sudo[109510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:30 compute-0 python3.9[109512]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:34:30 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Jan 27 08:34:30 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Jan 27 08:34:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:31 compute-0 ceph-mon[74357]: pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:31 compute-0 sudo[109510]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:31.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:31.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:32 compute-0 ceph-mon[74357]: 8.4 deep-scrub starts
Jan 27 08:34:32 compute-0 ceph-mon[74357]: 8.4 deep-scrub ok
Jan 27 08:34:32 compute-0 ceph-mon[74357]: 8.a scrub starts
Jan 27 08:34:32 compute-0 ceph-mon[74357]: 8.a scrub ok
Jan 27 08:34:32 compute-0 sudo[109664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulsahmcqcnmtuzjtgbyoipwwismkqthg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502871.8517263-170-131169541058731/AnsiballZ_systemd.py'
Jan 27 08:34:32 compute-0 sudo[109664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:32 compute-0 python3.9[109666]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:34:33 compute-0 ceph-mon[74357]: pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:33 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 27 08:34:33 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 27 08:34:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:34:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:33.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:34:33 compute-0 sudo[109664]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:34:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:34.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:34:34 compute-0 sudo[109718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:34 compute-0 sudo[109718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:34 compute-0 sudo[109718]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:34 compute-0 sudo[109772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:34 compute-0 sudo[109772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:34 compute-0 sudo[109772]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:34 compute-0 ceph-mon[74357]: 4.5 scrub starts
Jan 27 08:34:34 compute-0 ceph-mon[74357]: 4.5 scrub ok
Jan 27 08:34:34 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 27 08:34:34 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 27 08:34:34 compute-0 python3.9[109870]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:34:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:35 compute-0 ceph-mon[74357]: 11.1a scrub starts
Jan 27 08:34:35 compute-0 ceph-mon[74357]: 11.1a scrub ok
Jan 27 08:34:35 compute-0 ceph-mon[74357]: pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:35 compute-0 ceph-mon[74357]: 4.6 scrub starts
Jan 27 08:34:35 compute-0 ceph-mon[74357]: 4.6 scrub ok
Jan 27 08:34:35 compute-0 sudo[110021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zznboxlkontayavehautzkomplqlrvjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502875.1209984-224-274734477696267/AnsiballZ_sefcontext.py'
Jan 27 08:34:35 compute-0 sudo[110021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:35.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:35 compute-0 python3.9[110023]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 27 08:34:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:36.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:36 compute-0 sudo[110021]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:36 compute-0 ceph-mon[74357]: 8.6 scrub starts
Jan 27 08:34:36 compute-0 ceph-mon[74357]: 8.6 scrub ok
Jan 27 08:34:36 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 27 08:34:36 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 27 08:34:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:36 compute-0 python3.9[110173]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:34:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5d8f6f0 =====
Jan 27 08:34:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5d8f6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:34:38 compute-0 radosgw[92542]: beast: 0x7f84d5d8f6f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:38.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:34:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:34:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:38.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:34:38 compute-0 ceph-mon[74357]: 8.12 scrub starts
Jan 27 08:34:38 compute-0 ceph-mon[74357]: 8.12 scrub ok
Jan 27 08:34:38 compute-0 ceph-mon[74357]: pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:38 compute-0 ceph-mon[74357]: 11.a scrub starts
Jan 27 08:34:38 compute-0 ceph-mon[74357]: 11.a scrub ok
Jan 27 08:34:38 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 27 08:34:38 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 27 08:34:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:38 compute-0 sudo[110330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exhilcdhqrdgnztekwyxvhkefnarvted ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502878.4780862-278-225145977218533/AnsiballZ_dnf.py'
Jan 27 08:34:38 compute-0 sudo[110330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:39 compute-0 python3.9[110332]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:34:39 compute-0 ceph-mon[74357]: 8.19 scrub starts
Jan 27 08:34:39 compute-0 ceph-mon[74357]: 8.19 scrub ok
Jan 27 08:34:39 compute-0 ceph-mon[74357]: pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:40 compute-0 sudo[110330]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:34:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:40.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:34:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:40.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:40 compute-0 ceph-mon[74357]: 5.6 scrub starts
Jan 27 08:34:40 compute-0 ceph-mon[74357]: 5.6 scrub ok
Jan 27 08:34:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:40 compute-0 sudo[110484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pulcxrzqyyrpunkhfyjjokshhdplymvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502880.4488146-302-228455277908153/AnsiballZ_command.py'
Jan 27 08:34:40 compute-0 sudo[110484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:41 compute-0 python3.9[110486]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:34:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:41 compute-0 ceph-mon[74357]: pgmap v317: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:41 compute-0 ceph-mon[74357]: 5.14 scrub starts
Jan 27 08:34:41 compute-0 ceph-mon[74357]: 5.14 scrub ok
Jan 27 08:34:41 compute-0 sudo[110484]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:34:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:42.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:34:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5d8f6f0 =====
Jan 27 08:34:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5d8f6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:42 compute-0 radosgw[92542]: beast: 0x7f84d5d8f6f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:42.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:42 compute-0 sudo[110772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzscbkxovasrqoirnpauqybqkwsimtrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502882.0930157-326-252955604016327/AnsiballZ_file.py'
Jan 27 08:34:42 compute-0 sudo[110772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:42 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 27 08:34:42 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 27 08:34:42 compute-0 python3.9[110774]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 27 08:34:42 compute-0 sudo[110772]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:43 compute-0 python3.9[110925]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:34:43 compute-0 ceph-mon[74357]: 6.a scrub starts
Jan 27 08:34:43 compute-0 ceph-mon[74357]: 6.a scrub ok
Jan 27 08:34:43 compute-0 ceph-mon[74357]: pgmap v318: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:44 compute-0 sudo[111077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpyhwsqkdhfbkrjimxihvocxhajevter ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502883.7841816-374-68452008458368/AnsiballZ_dnf.py'
Jan 27 08:34:44 compute-0 sudo[111077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:44 compute-0 python3.9[111079]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:34:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:44.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:44.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:44 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 27 08:34:44 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 27 08:34:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:34:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:34:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:34:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:34:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:34:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:34:45 compute-0 sudo[111077]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:45 compute-0 ceph-mon[74357]: 6.2 scrub starts
Jan 27 08:34:45 compute-0 ceph-mon[74357]: 6.2 scrub ok
Jan 27 08:34:45 compute-0 ceph-mon[74357]: 5.a scrub starts
Jan 27 08:34:45 compute-0 ceph-mon[74357]: pgmap v319: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:45 compute-0 ceph-mon[74357]: 5.a scrub ok
Jan 27 08:34:46 compute-0 sudo[111231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvhmusytfursslhdxguoqvvropoqldvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502885.991507-401-122673647053249/AnsiballZ_dnf.py'
Jan 27 08:34:46 compute-0 sudo[111231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:46.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:34:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:46.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:34:46 compute-0 python3.9[111233]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:34:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:46 compute-0 ceph-mon[74357]: 5.3 scrub starts
Jan 27 08:34:46 compute-0 ceph-mon[74357]: 5.3 scrub ok
Jan 27 08:34:46 compute-0 ceph-mon[74357]: pgmap v320: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:47 compute-0 sudo[111231]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:47 compute-0 ceph-mon[74357]: 8.5 deep-scrub starts
Jan 27 08:34:47 compute-0 ceph-mon[74357]: 8.5 deep-scrub ok
Jan 27 08:34:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:34:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:48.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:34:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:48.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:48 compute-0 sudo[111385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsglhoyspdrohftfwuxeuncaeknncqwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502888.2476346-437-140445370734008/AnsiballZ_stat.py'
Jan 27 08:34:48 compute-0 sudo[111385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:48 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 27 08:34:48 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 27 08:34:48 compute-0 python3.9[111387]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:34:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:48 compute-0 sudo[111385]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:48 compute-0 ceph-mon[74357]: 8.9 deep-scrub starts
Jan 27 08:34:48 compute-0 ceph-mon[74357]: 8.9 deep-scrub ok
Jan 27 08:34:48 compute-0 ceph-mon[74357]: pgmap v321: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:49 compute-0 sudo[111540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmnqvwhcwhzakykebzchzsdbewpnnmpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502889.0206568-461-141792411665665/AnsiballZ_slurp.py'
Jan 27 08:34:49 compute-0 sudo[111540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:34:49 compute-0 python3.9[111542]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 27 08:34:49 compute-0 sudo[111540]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:50 compute-0 ceph-mon[74357]: 6.7 scrub starts
Jan 27 08:34:50 compute-0 ceph-mon[74357]: 6.7 scrub ok
Jan 27 08:34:50 compute-0 ceph-mon[74357]: 5.17 scrub starts
Jan 27 08:34:50 compute-0 ceph-mon[74357]: 5.17 scrub ok
Jan 27 08:34:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:34:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:50.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:34:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:50.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:50 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 27 08:34:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:50 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 27 08:34:50 compute-0 sshd-session[108813]: Connection closed by 192.168.122.30 port 37144
Jan 27 08:34:50 compute-0 sshd-session[108810]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:34:50 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 27 08:34:50 compute-0 systemd[1]: session-36.scope: Consumed 17.703s CPU time.
Jan 27 08:34:50 compute-0 systemd-logind[799]: Session 36 logged out. Waiting for processes to exit.
Jan 27 08:34:50 compute-0 systemd-logind[799]: Removed session 36.
Jan 27 08:34:51 compute-0 ceph-mon[74357]: pgmap v322: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:51 compute-0 ceph-mon[74357]: 11.8 deep-scrub starts
Jan 27 08:34:51 compute-0 ceph-mon[74357]: 11.8 deep-scrub ok
Jan 27 08:34:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:52 compute-0 ceph-mon[74357]: 6.3 scrub starts
Jan 27 08:34:52 compute-0 ceph-mon[74357]: 6.3 scrub ok
Jan 27 08:34:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:52.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:52.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:52 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.5 deep-scrub starts
Jan 27 08:34:52 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.5 deep-scrub ok
Jan 27 08:34:53 compute-0 ceph-mon[74357]: pgmap v323: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:53 compute-0 ceph-mon[74357]: 8.c scrub starts
Jan 27 08:34:53 compute-0 ceph-mon[74357]: 8.c scrub ok
Jan 27 08:34:53 compute-0 ceph-mon[74357]: 6.5 deep-scrub starts
Jan 27 08:34:53 compute-0 ceph-mon[74357]: 6.5 deep-scrub ok
Jan 27 08:34:54 compute-0 sudo[111569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:54 compute-0 sudo[111569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:54 compute-0 sudo[111569]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:34:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:54.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:34:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:34:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:54.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:34:54 compute-0 ceph-mon[74357]: 5.5 deep-scrub starts
Jan 27 08:34:54 compute-0 ceph-mon[74357]: 5.5 deep-scrub ok
Jan 27 08:34:54 compute-0 sudo[111594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:34:54 compute-0 sudo[111594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:34:54 compute-0 sudo[111594]: pam_unix(sudo:session): session closed for user root
Jan 27 08:34:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:55 compute-0 ceph-mon[74357]: 8.b scrub starts
Jan 27 08:34:55 compute-0 ceph-mon[74357]: pgmap v324: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:55 compute-0 ceph-mon[74357]: 8.b scrub ok
Jan 27 08:34:55 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 27 08:34:55 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 27 08:34:56 compute-0 sshd-session[111620]: Accepted publickey for zuul from 192.168.122.30 port 33816 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:34:56 compute-0 systemd-logind[799]: New session 37 of user zuul.
Jan 27 08:34:56 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 27 08:34:56 compute-0 sshd-session[111620]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:34:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:56.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:56.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:56 compute-0 ceph-mon[74357]: 6.d scrub starts
Jan 27 08:34:56 compute-0 ceph-mon[74357]: 6.d scrub ok
Jan 27 08:34:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:56 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 27 08:34:56 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 27 08:34:57 compute-0 python3.9[111773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:34:57 compute-0 ceph-mon[74357]: 4.8 deep-scrub starts
Jan 27 08:34:57 compute-0 ceph-mon[74357]: pgmap v325: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:57 compute-0 ceph-mon[74357]: 4.8 deep-scrub ok
Jan 27 08:34:57 compute-0 ceph-mon[74357]: 9.e scrub starts
Jan 27 08:34:57 compute-0 ceph-mon[74357]: 9.e scrub ok
Jan 27 08:34:57 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 27 08:34:57 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 27 08:34:58 compute-0 python3.9[111928]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:34:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:34:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:34:58.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:34:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:34:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:34:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:34:58.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:34:58 compute-0 ceph-mon[74357]: 6.6 scrub starts
Jan 27 08:34:58 compute-0 ceph-mon[74357]: 4.15 scrub starts
Jan 27 08:34:58 compute-0 ceph-mon[74357]: 6.6 scrub ok
Jan 27 08:34:58 compute-0 ceph-mon[74357]: 4.15 scrub ok
Jan 27 08:34:58 compute-0 ceph-mon[74357]: 9.6 scrub starts
Jan 27 08:34:58 compute-0 ceph-mon[74357]: 9.6 scrub ok
Jan 27 08:34:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:58 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 9.a deep-scrub starts
Jan 27 08:34:58 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 9.a deep-scrub ok
Jan 27 08:34:59 compute-0 python3.9[112122]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:34:59 compute-0 ceph-mon[74357]: 6.9 scrub starts
Jan 27 08:34:59 compute-0 ceph-mon[74357]: 6.9 scrub ok
Jan 27 08:34:59 compute-0 ceph-mon[74357]: pgmap v326: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:34:59 compute-0 ceph-mon[74357]: 9.a deep-scrub starts
Jan 27 08:34:59 compute-0 ceph-mon[74357]: 9.a deep-scrub ok
Jan 27 08:34:59 compute-0 sshd-session[111623]: Connection closed by 192.168.122.30 port 33816
Jan 27 08:34:59 compute-0 sshd-session[111620]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:34:59 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 27 08:34:59 compute-0 systemd[1]: session-37.scope: Consumed 2.165s CPU time.
Jan 27 08:34:59 compute-0 systemd-logind[799]: Session 37 logged out. Waiting for processes to exit.
Jan 27 08:34:59 compute-0 systemd-logind[799]: Removed session 37.
Jan 27 08:35:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:00.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:00.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:00 compute-0 ceph-mon[74357]: 4.1f scrub starts
Jan 27 08:35:00 compute-0 ceph-mon[74357]: 4.1f scrub ok
Jan 27 08:35:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:00 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 27 08:35:00 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 27 08:35:01 compute-0 anacron[29966]: Job `cron.daily' started
Jan 27 08:35:01 compute-0 anacron[29966]: Job `cron.daily' terminated
Jan 27 08:35:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:01 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 27 08:35:01 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 27 08:35:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:35:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:02.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:35:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:02.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:04.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:04.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:04 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 27 08:35:05 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 27 08:35:05 compute-0 ceph-mon[74357]: 6.b scrub starts
Jan 27 08:35:05 compute-0 ceph-mon[74357]: 6.b scrub ok
Jan 27 08:35:05 compute-0 ceph-mon[74357]: pgmap v327: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:05 compute-0 ceph-mon[74357]: 6.e scrub starts
Jan 27 08:35:05 compute-0 ceph-mon[74357]: 6.e scrub ok
Jan 27 08:35:05 compute-0 sshd-session[112153]: Accepted publickey for zuul from 192.168.122.30 port 46372 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:35:05 compute-0 systemd-logind[799]: New session 38 of user zuul.
Jan 27 08:35:05 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 27 08:35:05 compute-0 sshd-session[112153]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:35:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:06.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:06.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:06 compute-0 ceph-mon[74357]: 9.d scrub starts
Jan 27 08:35:06 compute-0 ceph-mon[74357]: 9.d scrub ok
Jan 27 08:35:06 compute-0 ceph-mon[74357]: 6.f scrub starts
Jan 27 08:35:06 compute-0 ceph-mon[74357]: 6.f scrub ok
Jan 27 08:35:06 compute-0 ceph-mon[74357]: 8.d deep-scrub starts
Jan 27 08:35:06 compute-0 ceph-mon[74357]: pgmap v328: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:06 compute-0 ceph-mon[74357]: 8.d deep-scrub ok
Jan 27 08:35:06 compute-0 ceph-mon[74357]: 11.e scrub starts
Jan 27 08:35:06 compute-0 ceph-mon[74357]: 11.e scrub ok
Jan 27 08:35:06 compute-0 ceph-mon[74357]: pgmap v329: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:06 compute-0 ceph-mon[74357]: 9.f scrub starts
Jan 27 08:35:06 compute-0 ceph-mon[74357]: 9.f scrub ok
Jan 27 08:35:06 compute-0 python3.9[112306]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:35:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:06 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 27 08:35:06 compute-0 ceph-osd[84951]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 27 08:35:07 compute-0 python3.9[112461]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:35:07 compute-0 ceph-mon[74357]: pgmap v330: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:07 compute-0 ceph-mon[74357]: 9.15 scrub starts
Jan 27 08:35:07 compute-0 ceph-mon[74357]: 8.1c scrub starts
Jan 27 08:35:07 compute-0 ceph-mon[74357]: 9.15 scrub ok
Jan 27 08:35:07 compute-0 ceph-mon[74357]: 8.1c scrub ok
Jan 27 08:35:08 compute-0 sudo[112615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tymmkrhcxcwhrsxjkzofblrcrrhrjjme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502907.946268-80-244764245621942/AnsiballZ_setup.py'
Jan 27 08:35:08 compute-0 sudo[112615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:35:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:08.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:35:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:08.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:08 compute-0 python3.9[112617]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:35:08 compute-0 sudo[112615]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:09 compute-0 sudo[112700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hugyptldblacsgqjrjvljqwgrgzgijay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502907.946268-80-244764245621942/AnsiballZ_dnf.py'
Jan 27 08:35:09 compute-0 sudo[112700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:09 compute-0 sudo[112703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:09 compute-0 sudo[112703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:09 compute-0 sudo[112703]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:09 compute-0 sudo[112728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:35:09 compute-0 sudo[112728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:09 compute-0 sudo[112728]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:09 compute-0 sudo[112753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:09 compute-0 sudo[112753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:09 compute-0 sudo[112753]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:09 compute-0 sudo[112778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:35:09 compute-0 sudo[112778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:09 compute-0 python3.9[112702]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:35:09 compute-0 ceph-mon[74357]: pgmap v331: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:09 compute-0 sudo[112778]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:10.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:10.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:10 compute-0 ceph-mon[74357]: 11.3 deep-scrub starts
Jan 27 08:35:10 compute-0 ceph-mon[74357]: 11.3 deep-scrub ok
Jan 27 08:35:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:10 compute-0 sudo[112700]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:35:11 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:35:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:35:11 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:35:11 compute-0 sudo[112985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqycxvhuarsscugwmhlpxrynakrodqst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502911.0013697-116-162133870424486/AnsiballZ_setup.py'
Jan 27 08:35:11 compute-0 sudo[112985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:11 compute-0 python3.9[112987]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:35:11 compute-0 ceph-mon[74357]: 9.3 scrub starts
Jan 27 08:35:11 compute-0 ceph-mon[74357]: pgmap v332: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:11 compute-0 ceph-mon[74357]: 9.3 scrub ok
Jan 27 08:35:11 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:35:11 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:35:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:35:11 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:35:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:35:11 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:35:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:35:11 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:35:11 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 39fbc9d7-aeaf-45cd-9342-9efdddea30a9 does not exist
Jan 27 08:35:11 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 009343e5-be5c-4ee1-a198-9fd9a927f01c does not exist
Jan 27 08:35:11 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 60c5429e-6d20-4e12-8a97-9ffddd39a987 does not exist
Jan 27 08:35:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:35:11 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:35:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:35:11 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:35:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:35:11 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:35:11 compute-0 sudo[113026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:11 compute-0 sudo[113026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:11 compute-0 sudo[113026]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:11 compute-0 sudo[112985]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:11 compute-0 sudo[113056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:35:11 compute-0 sudo[113056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:11 compute-0 sudo[113056]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:11 compute-0 sudo[113101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:11 compute-0 sudo[113101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:11 compute-0 sudo[113101]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:11 compute-0 sudo[113130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:35:12 compute-0 sudo[113130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:12 compute-0 podman[113218]: 2026-01-27 08:35:12.300177523 +0000 UTC m=+0.047820364 container create 6ffc07e2462d95caf00e616e019f169b716d0336cd8e01e00ce09f822d47106b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:35:12 compute-0 systemd[1]: Started libpod-conmon-6ffc07e2462d95caf00e616e019f169b716d0336cd8e01e00ce09f822d47106b.scope.
Jan 27 08:35:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:35:12 compute-0 podman[113218]: 2026-01-27 08:35:12.27511741 +0000 UTC m=+0.022760301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:35:12 compute-0 podman[113218]: 2026-01-27 08:35:12.384502523 +0000 UTC m=+0.132145454 container init 6ffc07e2462d95caf00e616e019f169b716d0336cd8e01e00ce09f822d47106b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 27 08:35:12 compute-0 podman[113218]: 2026-01-27 08:35:12.396079143 +0000 UTC m=+0.143721974 container start 6ffc07e2462d95caf00e616e019f169b716d0336cd8e01e00ce09f822d47106b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 27 08:35:12 compute-0 podman[113218]: 2026-01-27 08:35:12.399464726 +0000 UTC m=+0.147107657 container attach 6ffc07e2462d95caf00e616e019f169b716d0336cd8e01e00ce09f822d47106b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:35:12 compute-0 zealous_brattain[113263]: 167 167
Jan 27 08:35:12 compute-0 systemd[1]: libpod-6ffc07e2462d95caf00e616e019f169b716d0336cd8e01e00ce09f822d47106b.scope: Deactivated successfully.
Jan 27 08:35:12 compute-0 conmon[113263]: conmon 6ffc07e2462d95caf00e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ffc07e2462d95caf00e616e019f169b716d0336cd8e01e00ce09f822d47106b.scope/container/memory.events
Jan 27 08:35:12 compute-0 podman[113218]: 2026-01-27 08:35:12.402409407 +0000 UTC m=+0.150052278 container died 6ffc07e2462d95caf00e616e019f169b716d0336cd8e01e00ce09f822d47106b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 08:35:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-967666bd3815b0157e382e7726794300fc91e04c8ad59d2ade16ebef8801d23f-merged.mount: Deactivated successfully.
Jan 27 08:35:12 compute-0 podman[113218]: 2026-01-27 08:35:12.4524519 +0000 UTC m=+0.200094731 container remove 6ffc07e2462d95caf00e616e019f169b716d0336cd8e01e00ce09f822d47106b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:35:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:35:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:12.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:35:12 compute-0 systemd[1]: libpod-conmon-6ffc07e2462d95caf00e616e019f169b716d0336cd8e01e00ce09f822d47106b.scope: Deactivated successfully.
Jan 27 08:35:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:12.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:12 compute-0 podman[113316]: 2026-01-27 08:35:12.637398411 +0000 UTC m=+0.040161521 container create ac71f47cfa39b7eebcf0cdf97aaf9990cad71af95594ad934684a9f4e3ee5017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:35:12 compute-0 systemd[1]: Started libpod-conmon-ac71f47cfa39b7eebcf0cdf97aaf9990cad71af95594ad934684a9f4e3ee5017.scope.
Jan 27 08:35:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:35:12 compute-0 sudo[113380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aocpdvmwqwlygebcrzwjjjrhrmptwthd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502912.2414072-149-95523421391756/AnsiballZ_file.py'
Jan 27 08:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54af4ea42729ec1229fc35dc5424f6035a1848dfcabb0a6bd161d21ce64ec085/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54af4ea42729ec1229fc35dc5424f6035a1848dfcabb0a6bd161d21ce64ec085/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54af4ea42729ec1229fc35dc5424f6035a1848dfcabb0a6bd161d21ce64ec085/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54af4ea42729ec1229fc35dc5424f6035a1848dfcabb0a6bd161d21ce64ec085/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54af4ea42729ec1229fc35dc5424f6035a1848dfcabb0a6bd161d21ce64ec085/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:12 compute-0 sudo[113380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:12 compute-0 podman[113316]: 2026-01-27 08:35:12.618019155 +0000 UTC m=+0.020782315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:35:12 compute-0 podman[113316]: 2026-01-27 08:35:12.72241495 +0000 UTC m=+0.125178080 container init ac71f47cfa39b7eebcf0cdf97aaf9990cad71af95594ad934684a9f4e3ee5017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 27 08:35:12 compute-0 podman[113316]: 2026-01-27 08:35:12.730587415 +0000 UTC m=+0.133350535 container start ac71f47cfa39b7eebcf0cdf97aaf9990cad71af95594ad934684a9f4e3ee5017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_fermi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Jan 27 08:35:12 compute-0 podman[113316]: 2026-01-27 08:35:12.735366758 +0000 UTC m=+0.138129878 container attach ac71f47cfa39b7eebcf0cdf97aaf9990cad71af95594ad934684a9f4e3ee5017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:35:12 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:35:12 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:35:12 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:35:12 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:35:12 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:35:12 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:35:12 compute-0 ceph-mon[74357]: 9.19 scrub starts
Jan 27 08:35:12 compute-0 ceph-mon[74357]: 9.19 scrub ok
Jan 27 08:35:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:12 compute-0 python3.9[113383]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:35:12 compute-0 sudo[113380]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:13 compute-0 sudo[113545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sswbptsqtrublcybbkqbcmhcoaqgdnzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502913.1158972-173-28001630284683/AnsiballZ_command.py'
Jan 27 08:35:13 compute-0 cranky_fermi[113378]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:35:13 compute-0 cranky_fermi[113378]: --> relative data size: 1.0
Jan 27 08:35:13 compute-0 cranky_fermi[113378]: --> All data devices are unavailable
Jan 27 08:35:13 compute-0 sudo[113545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:13 compute-0 systemd[1]: libpod-ac71f47cfa39b7eebcf0cdf97aaf9990cad71af95594ad934684a9f4e3ee5017.scope: Deactivated successfully.
Jan 27 08:35:13 compute-0 podman[113316]: 2026-01-27 08:35:13.581776748 +0000 UTC m=+0.984539858 container died ac71f47cfa39b7eebcf0cdf97aaf9990cad71af95594ad934684a9f4e3ee5017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 27 08:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-54af4ea42729ec1229fc35dc5424f6035a1848dfcabb0a6bd161d21ce64ec085-merged.mount: Deactivated successfully.
Jan 27 08:35:13 compute-0 podman[113316]: 2026-01-27 08:35:13.639014129 +0000 UTC m=+1.041777239 container remove ac71f47cfa39b7eebcf0cdf97aaf9990cad71af95594ad934684a9f4e3ee5017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_fermi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:35:13 compute-0 systemd[1]: libpod-conmon-ac71f47cfa39b7eebcf0cdf97aaf9990cad71af95594ad934684a9f4e3ee5017.scope: Deactivated successfully.
Jan 27 08:35:13 compute-0 sudo[113130]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:13 compute-0 sudo[113563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:13 compute-0 sudo[113563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:13 compute-0 sudo[113563]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:13 compute-0 python3.9[113548]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:35:13 compute-0 ceph-mon[74357]: pgmap v333: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:13 compute-0 ceph-mon[74357]: 9.7 scrub starts
Jan 27 08:35:13 compute-0 ceph-mon[74357]: 9.7 scrub ok
Jan 27 08:35:13 compute-0 sudo[113588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:35:13 compute-0 sudo[113588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:13 compute-0 sudo[113588]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:13 compute-0 sudo[113545]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:13 compute-0 sudo[113626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:13 compute-0 sudo[113626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:13 compute-0 sudo[113626]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:13 compute-0 sudo[113652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:35:13 compute-0 sudo[113652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:14 compute-0 podman[113741]: 2026-01-27 08:35:14.154085143 +0000 UTC m=+0.035737898 container create d3dd289f2a8da1e040ff3f4d270377e4643c32261602533fe80f7d140714e377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatelet, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 08:35:14 compute-0 systemd[1]: Started libpod-conmon-d3dd289f2a8da1e040ff3f4d270377e4643c32261602533fe80f7d140714e377.scope.
Jan 27 08:35:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:35:14 compute-0 podman[113741]: 2026-01-27 08:35:14.220945141 +0000 UTC m=+0.102597916 container init d3dd289f2a8da1e040ff3f4d270377e4643c32261602533fe80f7d140714e377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatelet, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 27 08:35:14 compute-0 podman[113741]: 2026-01-27 08:35:14.227046419 +0000 UTC m=+0.108699174 container start d3dd289f2a8da1e040ff3f4d270377e4643c32261602533fe80f7d140714e377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatelet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:35:14 compute-0 nostalgic_chatelet[113804]: 167 167
Jan 27 08:35:14 compute-0 systemd[1]: libpod-d3dd289f2a8da1e040ff3f4d270377e4643c32261602533fe80f7d140714e377.scope: Deactivated successfully.
Jan 27 08:35:14 compute-0 podman[113741]: 2026-01-27 08:35:14.234393283 +0000 UTC m=+0.116046088 container attach d3dd289f2a8da1e040ff3f4d270377e4643c32261602533fe80f7d140714e377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:35:14 compute-0 podman[113741]: 2026-01-27 08:35:14.234675471 +0000 UTC m=+0.116328226 container died d3dd289f2a8da1e040ff3f4d270377e4643c32261602533fe80f7d140714e377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 08:35:14 compute-0 podman[113741]: 2026-01-27 08:35:14.140107707 +0000 UTC m=+0.021760482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:35:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c666b9aedef42c7c6298e92959be354744c17d320f6b7221bf6f1711d6a2a489-merged.mount: Deactivated successfully.
Jan 27 08:35:14 compute-0 podman[113741]: 2026-01-27 08:35:14.275956782 +0000 UTC m=+0.157609537 container remove d3dd289f2a8da1e040ff3f4d270377e4643c32261602533fe80f7d140714e377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatelet, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:35:14 compute-0 systemd[1]: libpod-conmon-d3dd289f2a8da1e040ff3f4d270377e4643c32261602533fe80f7d140714e377.scope: Deactivated successfully.
Jan 27 08:35:14 compute-0 podman[113833]: 2026-01-27 08:35:14.418677886 +0000 UTC m=+0.045916551 container create 10e4db7ae018ea1414a4f40786205b8d599cac55cf58b20b5f89ffc7511c84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:35:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:14.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:14 compute-0 systemd[1]: Started libpod-conmon-10e4db7ae018ea1414a4f40786205b8d599cac55cf58b20b5f89ffc7511c84ac.scope.
Jan 27 08:35:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:14.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:14 compute-0 podman[113833]: 2026-01-27 08:35:14.396335098 +0000 UTC m=+0.023573743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:35:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5798baa4f9cfc9ef1894e9f06985bcf089ebaaf3c03c694e2f9d8b5fcb8e233/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5798baa4f9cfc9ef1894e9f06985bcf089ebaaf3c03c694e2f9d8b5fcb8e233/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5798baa4f9cfc9ef1894e9f06985bcf089ebaaf3c03c694e2f9d8b5fcb8e233/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5798baa4f9cfc9ef1894e9f06985bcf089ebaaf3c03c694e2f9d8b5fcb8e233/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:14 compute-0 podman[113833]: 2026-01-27 08:35:14.533361405 +0000 UTC m=+0.160600060 container init 10e4db7ae018ea1414a4f40786205b8d599cac55cf58b20b5f89ffc7511c84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 27 08:35:14 compute-0 podman[113833]: 2026-01-27 08:35:14.542070246 +0000 UTC m=+0.169308881 container start 10e4db7ae018ea1414a4f40786205b8d599cac55cf58b20b5f89ffc7511c84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:35:14 compute-0 podman[113833]: 2026-01-27 08:35:14.546162978 +0000 UTC m=+0.173401693 container attach 10e4db7ae018ea1414a4f40786205b8d599cac55cf58b20b5f89ffc7511c84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 27 08:35:14 compute-0 sudo[113879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:14 compute-0 sudo[113879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:14 compute-0 sudo[113879]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:14 compute-0 sudo[113966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npxlgutuldguorehgctzlysaurbkcbiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502914.1431847-197-93433569858590/AnsiballZ_stat.py'
Jan 27 08:35:14 compute-0 sudo[113966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:14 compute-0 sudo[113942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:14 compute-0 sudo[113942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:14 compute-0 sudo[113942]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:14 compute-0 ceph-mon[74357]: 9.1a scrub starts
Jan 27 08:35:14 compute-0 ceph-mon[74357]: 9.1a scrub ok
Jan 27 08:35:14 compute-0 python3.9[113978]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:35:14 compute-0 sudo[113966]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:35:14
Jan 27 08:35:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:35:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:35:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', '.rgw.root', 'images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 27 08:35:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:35:15 compute-0 sudo[114057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpkhtorozjnvosqdlcpeieslglxajnuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502914.1431847-197-93433569858590/AnsiballZ_file.py'
Jan 27 08:35:15 compute-0 sudo[114057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:35:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:35:15 compute-0 python3.9[114059]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:35:15 compute-0 sudo[114057]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]: {
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:     "0": [
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:         {
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             "devices": [
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "/dev/loop3"
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             ],
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             "lv_name": "ceph_lv0",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             "lv_size": "7511998464",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             "name": "ceph_lv0",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             "tags": {
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "ceph.cluster_name": "ceph",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "ceph.crush_device_class": "",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "ceph.encrypted": "0",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "ceph.osd_id": "0",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "ceph.type": "block",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:                 "ceph.vdo": "0"
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             },
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             "type": "block",
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:             "vg_name": "ceph_vg0"
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:         }
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]:     ]
Jan 27 08:35:15 compute-0 peaceful_zhukovsky[113873]: }
Jan 27 08:35:15 compute-0 systemd[1]: libpod-10e4db7ae018ea1414a4f40786205b8d599cac55cf58b20b5f89ffc7511c84ac.scope: Deactivated successfully.
Jan 27 08:35:15 compute-0 podman[113833]: 2026-01-27 08:35:15.33723984 +0000 UTC m=+0.964478495 container died 10e4db7ae018ea1414a4f40786205b8d599cac55cf58b20b5f89ffc7511c84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5798baa4f9cfc9ef1894e9f06985bcf089ebaaf3c03c694e2f9d8b5fcb8e233-merged.mount: Deactivated successfully.
Jan 27 08:35:15 compute-0 podman[113833]: 2026-01-27 08:35:15.399634154 +0000 UTC m=+1.026872789 container remove 10e4db7ae018ea1414a4f40786205b8d599cac55cf58b20b5f89ffc7511c84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:35:15 compute-0 systemd[1]: libpod-conmon-10e4db7ae018ea1414a4f40786205b8d599cac55cf58b20b5f89ffc7511c84ac.scope: Deactivated successfully.
Jan 27 08:35:15 compute-0 sudo[113652]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:15 compute-0 sudo[114102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:15 compute-0 sudo[114102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:15 compute-0 sudo[114102]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:15 compute-0 sudo[114150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:35:15 compute-0 sudo[114150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:15 compute-0 sudo[114150]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:15 compute-0 sudo[114204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:15 compute-0 sudo[114204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:15 compute-0 sudo[114204]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:15 compute-0 sudo[114242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:35:15 compute-0 sudo[114242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:15 compute-0 sudo[114327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcxhwvuzqgymbrcmqxmimjbosnbzutdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502915.5002809-233-242622004277064/AnsiballZ_stat.py'
Jan 27 08:35:15 compute-0 sudo[114327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:15 compute-0 ceph-mon[74357]: pgmap v334: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:15 compute-0 ceph-mon[74357]: 9.17 scrub starts
Jan 27 08:35:15 compute-0 ceph-mon[74357]: 9.17 scrub ok
Jan 27 08:35:15 compute-0 podman[114368]: 2026-01-27 08:35:15.964680539 +0000 UTC m=+0.036745306 container create d4ac4a9cdd3e235795786c489f3a297946539bd3d1cf7fd936643737973942d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:35:15 compute-0 python3.9[114329]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:35:15 compute-0 systemd[1]: Started libpod-conmon-d4ac4a9cdd3e235795786c489f3a297946539bd3d1cf7fd936643737973942d4.scope.
Jan 27 08:35:16 compute-0 sudo[114327]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:35:16 compute-0 podman[114368]: 2026-01-27 08:35:16.043646171 +0000 UTC m=+0.115710968 container init d4ac4a9cdd3e235795786c489f3a297946539bd3d1cf7fd936643737973942d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_fermat, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:35:16 compute-0 podman[114368]: 2026-01-27 08:35:15.949449168 +0000 UTC m=+0.021513975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:35:16 compute-0 podman[114368]: 2026-01-27 08:35:16.050234333 +0000 UTC m=+0.122299110 container start d4ac4a9cdd3e235795786c489f3a297946539bd3d1cf7fd936643737973942d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:35:16 compute-0 podman[114368]: 2026-01-27 08:35:16.053948726 +0000 UTC m=+0.126013503 container attach d4ac4a9cdd3e235795786c489f3a297946539bd3d1cf7fd936643737973942d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_fermat, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:35:16 compute-0 bold_fermat[114386]: 167 167
Jan 27 08:35:16 compute-0 systemd[1]: libpod-d4ac4a9cdd3e235795786c489f3a297946539bd3d1cf7fd936643737973942d4.scope: Deactivated successfully.
Jan 27 08:35:16 compute-0 podman[114368]: 2026-01-27 08:35:16.055825977 +0000 UTC m=+0.127890754 container died d4ac4a9cdd3e235795786c489f3a297946539bd3d1cf7fd936643737973942d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-17423ca6191a2dd091b68f5fe3e0b9a5d78e0b7ac909b95e0beeadf03af38fc8-merged.mount: Deactivated successfully.
Jan 27 08:35:16 compute-0 podman[114368]: 2026-01-27 08:35:16.098239399 +0000 UTC m=+0.170304186 container remove d4ac4a9cdd3e235795786c489f3a297946539bd3d1cf7fd936643737973942d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_fermat, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:35:16 compute-0 systemd[1]: libpod-conmon-d4ac4a9cdd3e235795786c489f3a297946539bd3d1cf7fd936643737973942d4.scope: Deactivated successfully.
Jan 27 08:35:16 compute-0 sudo[114483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbsrggtqcableoyeyyjbudfdhswittqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502915.5002809-233-242622004277064/AnsiballZ_file.py'
Jan 27 08:35:16 compute-0 sudo[114483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:16 compute-0 podman[114482]: 2026-01-27 08:35:16.276765023 +0000 UTC m=+0.048746918 container create 0d4f1c710c1bb9bf7aa934c5662610454b3ce825935e4942444875e9c9e444c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_khayyam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 27 08:35:16 compute-0 systemd[1]: Started libpod-conmon-0d4f1c710c1bb9bf7aa934c5662610454b3ce825935e4942444875e9c9e444c3.scope.
Jan 27 08:35:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a89d3d5f78ac755a1d6578e4f8f9a86b989e957886cb6434d981a04d42e11b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a89d3d5f78ac755a1d6578e4f8f9a86b989e957886cb6434d981a04d42e11b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a89d3d5f78ac755a1d6578e4f8f9a86b989e957886cb6434d981a04d42e11b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a89d3d5f78ac755a1d6578e4f8f9a86b989e957886cb6434d981a04d42e11b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:16 compute-0 podman[114482]: 2026-01-27 08:35:16.251141705 +0000 UTC m=+0.023123660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:35:16 compute-0 podman[114482]: 2026-01-27 08:35:16.35554962 +0000 UTC m=+0.127531505 container init 0d4f1c710c1bb9bf7aa934c5662610454b3ce825935e4942444875e9c9e444c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_khayyam, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 27 08:35:16 compute-0 podman[114482]: 2026-01-27 08:35:16.361300459 +0000 UTC m=+0.133282324 container start 0d4f1c710c1bb9bf7aa934c5662610454b3ce825935e4942444875e9c9e444c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_khayyam, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:35:16 compute-0 podman[114482]: 2026-01-27 08:35:16.363971293 +0000 UTC m=+0.135953198 container attach 0d4f1c710c1bb9bf7aa934c5662610454b3ce825935e4942444875e9c9e444c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_khayyam, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:35:16 compute-0 python3.9[114495]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:35:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:16.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:16 compute-0 sudo[114483]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.003000082s ======
Jan 27 08:35:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:16.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Jan 27 08:35:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:17 compute-0 sad_khayyam[114501]: {
Jan 27 08:35:17 compute-0 sad_khayyam[114501]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:35:17 compute-0 sad_khayyam[114501]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:35:17 compute-0 sad_khayyam[114501]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:35:17 compute-0 sad_khayyam[114501]:         "osd_id": 0,
Jan 27 08:35:17 compute-0 sad_khayyam[114501]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:35:17 compute-0 sad_khayyam[114501]:         "type": "bluestore"
Jan 27 08:35:17 compute-0 sad_khayyam[114501]:     }
Jan 27 08:35:17 compute-0 sad_khayyam[114501]: }
Jan 27 08:35:17 compute-0 systemd[1]: libpod-0d4f1c710c1bb9bf7aa934c5662610454b3ce825935e4942444875e9c9e444c3.scope: Deactivated successfully.
Jan 27 08:35:17 compute-0 podman[114482]: 2026-01-27 08:35:17.218335623 +0000 UTC m=+0.990317478 container died 0d4f1c710c1bb9bf7aa934c5662610454b3ce825935e4942444875e9c9e444c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:35:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-86a89d3d5f78ac755a1d6578e4f8f9a86b989e957886cb6434d981a04d42e11b-merged.mount: Deactivated successfully.
Jan 27 08:35:17 compute-0 sudo[114674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldujjexzuqzquyeuwsznvlhrjndobdsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502916.7805378-272-51851603460243/AnsiballZ_ini_file.py'
Jan 27 08:35:17 compute-0 sudo[114674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:17 compute-0 podman[114482]: 2026-01-27 08:35:17.270651959 +0000 UTC m=+1.042633824 container remove 0d4f1c710c1bb9bf7aa934c5662610454b3ce825935e4942444875e9c9e444c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_khayyam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:35:17 compute-0 systemd[1]: libpod-conmon-0d4f1c710c1bb9bf7aa934c5662610454b3ce825935e4942444875e9c9e444c3.scope: Deactivated successfully.
Jan 27 08:35:17 compute-0 sudo[114242]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:35:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:35:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:35:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:35:17 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 9d228a08-1441-4b35-b956-6b4996cf9ab1 does not exist
Jan 27 08:35:17 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev a5249eba-2704-47d7-8cc5-302ebf955e34 does not exist
Jan 27 08:35:17 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev a04c2016-1b6c-44c0-a172-d34e978a3c2e does not exist
Jan 27 08:35:17 compute-0 sudo[114687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:17 compute-0 sudo[114687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:17 compute-0 sudo[114687]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:17 compute-0 sudo[114712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:35:17 compute-0 sudo[114712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:17 compute-0 sudo[114712]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:17 compute-0 python3.9[114686]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:35:17 compute-0 sudo[114674]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:17 compute-0 sudo[114886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwxzjaypnwaauqlwnughjxknpspphcyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502917.6079261-272-98410192822062/AnsiballZ_ini_file.py'
Jan 27 08:35:17 compute-0 sudo[114886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:17 compute-0 ceph-mon[74357]: pgmap v335: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:17 compute-0 ceph-mon[74357]: 9.1b scrub starts
Jan 27 08:35:17 compute-0 ceph-mon[74357]: 9.1b scrub ok
Jan 27 08:35:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:35:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:35:18 compute-0 python3.9[114888]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:35:18 compute-0 sudo[114886]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:18.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:18.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:18 compute-0 sudo[115038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwtqalygwfazzugtbjzkknlvdsewexqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502918.1938071-272-173401755463749/AnsiballZ_ini_file.py'
Jan 27 08:35:18 compute-0 sudo[115038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:18 compute-0 python3.9[115040]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:35:18 compute-0 sudo[115038]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:18 compute-0 ceph-mon[74357]: 9.1e deep-scrub starts
Jan 27 08:35:18 compute-0 ceph-mon[74357]: 9.1e deep-scrub ok
Jan 27 08:35:19 compute-0 sudo[115191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwybtoftlyjpexypubofhnjzorinekfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502918.7950249-272-5906189694299/AnsiballZ_ini_file.py'
Jan 27 08:35:19 compute-0 sudo[115191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:19 compute-0 python3.9[115193]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:35:19 compute-0 sudo[115191]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:19 compute-0 ceph-mon[74357]: pgmap v336: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:20 compute-0 sudo[115343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-namkxhwhlrznrbwkyefgkdhbwiousjne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502919.9604592-365-233594084850825/AnsiballZ_dnf.py'
Jan 27 08:35:20 compute-0 sudo[115343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:20 compute-0 python3.9[115345]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:35:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:20.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:20.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:20 compute-0 ceph-mon[74357]: pgmap v337: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:21 compute-0 sudo[115343]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:22 compute-0 ceph-mon[74357]: 9.13 scrub starts
Jan 27 08:35:22 compute-0 ceph-mon[74357]: 9.13 scrub ok
Jan 27 08:35:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:22.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:22.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:22 compute-0 sudo[115497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbegiqqcrmmedjnvydyvhkgkomjdmbrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502922.248518-398-237590522803446/AnsiballZ_setup.py'
Jan 27 08:35:22 compute-0 sudo[115497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:22 compute-0 python3.9[115499]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:35:22 compute-0 sudo[115497]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:23 compute-0 ceph-mon[74357]: 9.1f scrub starts
Jan 27 08:35:23 compute-0 ceph-mon[74357]: 9.1f scrub ok
Jan 27 08:35:23 compute-0 ceph-mon[74357]: pgmap v338: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:23 compute-0 sudo[115652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilqmkbmpynmfcwosnmlxyzkoxxqbwldt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502923.087061-422-93006613706907/AnsiballZ_stat.py'
Jan 27 08:35:23 compute-0 sudo[115652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:23 compute-0 python3.9[115654]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:35:23 compute-0 sudo[115652]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:35:24 compute-0 ceph-mon[74357]: 9.b scrub starts
Jan 27 08:35:24 compute-0 ceph-mon[74357]: 9.b scrub ok
Jan 27 08:35:24 compute-0 sudo[115804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqfbpznaqdzmarxtgkuikkarcrtbwglr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502923.9517066-449-71233033539554/AnsiballZ_stat.py'
Jan 27 08:35:24 compute-0 sudo[115804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:24 compute-0 python3.9[115806]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:35:24 compute-0 sudo[115804]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:24.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:25 compute-0 ceph-mon[74357]: 9.5 scrub starts
Jan 27 08:35:25 compute-0 ceph-mon[74357]: 9.5 scrub ok
Jan 27 08:35:25 compute-0 ceph-mon[74357]: pgmap v339: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:25 compute-0 sudo[115957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llzghduezyurmbdfbueokicfhnbhuwba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502924.8973489-479-559643608665/AnsiballZ_command.py'
Jan 27 08:35:25 compute-0 sudo[115957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:25 compute-0 python3.9[115959]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:35:25 compute-0 sudo[115957]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:26 compute-0 sudo[116110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inxdmsjoyznueobehijaherajatxfxew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502925.8251574-509-65913051381240/AnsiballZ_service_facts.py'
Jan 27 08:35:26 compute-0 sudo[116110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:26 compute-0 python3.9[116112]: ansible-service_facts Invoked
Jan 27 08:35:26 compute-0 network[116129]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:35:26 compute-0 network[116130]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:35:26 compute-0 network[116131]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:35:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:26.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:26.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:27 compute-0 ceph-mon[74357]: pgmap v340: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:28.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:28.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:29 compute-0 sudo[116110]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:29 compute-0 ceph-mon[74357]: 9.18 scrub starts
Jan 27 08:35:29 compute-0 ceph-mon[74357]: 9.18 scrub ok
Jan 27 08:35:29 compute-0 ceph-mon[74357]: pgmap v341: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:30.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:30.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:30 compute-0 ceph-mon[74357]: pgmap v342: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:31 compute-0 sudo[116417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evlnwzjppfnfmeqnwnqbhlbtuhryjdjk ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769502930.6266887-554-154571686903729/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769502930.6266887-554-154571686903729/args'
Jan 27 08:35:31 compute-0 sudo[116417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:31 compute-0 sudo[116417]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:31 compute-0 sudo[116584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcohyjrobxvwttolmfrtvkiwmjrrpsvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502931.514887-587-71768776007269/AnsiballZ_dnf.py'
Jan 27 08:35:31 compute-0 sudo[116584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:31 compute-0 ceph-mon[74357]: 9.8 scrub starts
Jan 27 08:35:31 compute-0 ceph-mon[74357]: 9.8 scrub ok
Jan 27 08:35:32 compute-0 python3.9[116586]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:35:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:32.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:35:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:32.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:35:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:32 compute-0 ceph-mon[74357]: pgmap v343: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:33 compute-0 sudo[116584]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:34.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:34 compute-0 sudo[116738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxowkmvylpkapwvmstjypktuyeifsjzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502933.9219236-626-180824699350159/AnsiballZ_package_facts.py'
Jan 27 08:35:34 compute-0 sudo[116738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:34 compute-0 sudo[116741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:34 compute-0 sudo[116741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:34 compute-0 sudo[116741]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:34 compute-0 sudo[116766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:34 compute-0 sudo[116766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:34 compute-0 sudo[116766]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:34 compute-0 python3.9[116740]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 27 08:35:35 compute-0 sudo[116738]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:35 compute-0 ceph-mon[74357]: pgmap v344: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:36 compute-0 sudo[116941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojotehhgmuhrnmwgmxxhlecalqillozu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502935.8139772-656-152791968444909/AnsiballZ_stat.py'
Jan 27 08:35:36 compute-0 sudo[116941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:36 compute-0 python3.9[116943]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:35:36 compute-0 sudo[116941]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:36.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:36 compute-0 sudo[117019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxwhavcljslcbmrhzynxkpasxliwfbtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502935.8139772-656-152791968444909/AnsiballZ_file.py'
Jan 27 08:35:36 compute-0 sudo[117019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:36 compute-0 python3.9[117021]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:35:36 compute-0 sudo[117019]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:37 compute-0 sudo[117172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vawfzhpnkhvthmeplrogcbakxdlxzyps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502937.2211285-692-18421172753808/AnsiballZ_stat.py'
Jan 27 08:35:37 compute-0 sudo[117172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:37 compute-0 python3.9[117174]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:35:37 compute-0 sudo[117172]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:37 compute-0 ceph-mon[74357]: 9.9 scrub starts
Jan 27 08:35:37 compute-0 ceph-mon[74357]: pgmap v345: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:37 compute-0 ceph-mon[74357]: 9.9 scrub ok
Jan 27 08:35:38 compute-0 sudo[117250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qystgruruuohkqjutzssckzvjktjmnki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502937.2211285-692-18421172753808/AnsiballZ_file.py'
Jan 27 08:35:38 compute-0 sudo[117250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:38 compute-0 python3.9[117252]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:35:38 compute-0 sudo[117250]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:38.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:38 compute-0 ceph-mon[74357]: pgmap v346: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:39 compute-0 sudo[117403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xivkszjnxkdnpkywjhtbjdnrukjmmoej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502939.274767-746-176884870402542/AnsiballZ_lineinfile.py'
Jan 27 08:35:39 compute-0 sudo[117403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:39 compute-0 python3.9[117405]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:35:39 compute-0 sudo[117403]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:39 compute-0 ceph-mon[74357]: 9.16 scrub starts
Jan 27 08:35:39 compute-0 ceph-mon[74357]: 9.16 scrub ok
Jan 27 08:35:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:40.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:40.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:41 compute-0 ceph-mon[74357]: pgmap v347: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:41 compute-0 sudo[117556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqmnnoxtjodyoppmhhfwfvhihsmpwxup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502940.9598577-791-117911449650879/AnsiballZ_setup.py'
Jan 27 08:35:41 compute-0 sudo[117556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:41 compute-0 python3.9[117558]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:35:41 compute-0 sudo[117556]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:42 compute-0 sudo[117640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfctyyootldmvhwvzxunyvoazslqydle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502940.9598577-791-117911449650879/AnsiballZ_systemd.py'
Jan 27 08:35:42 compute-0 sudo[117640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:42.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:42.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:42 compute-0 python3.9[117642]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:35:42 compute-0 sudo[117640]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:43 compute-0 sshd-session[112156]: Connection closed by 192.168.122.30 port 46372
Jan 27 08:35:43 compute-0 sshd-session[112153]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:35:43 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 27 08:35:43 compute-0 systemd[1]: session-38.scope: Consumed 23.256s CPU time.
Jan 27 08:35:43 compute-0 systemd-logind[799]: Session 38 logged out. Waiting for processes to exit.
Jan 27 08:35:43 compute-0 systemd-logind[799]: Removed session 38.
Jan 27 08:35:43 compute-0 ceph-mon[74357]: pgmap v348: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:44.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:44.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:44 compute-0 ceph-mon[74357]: 9.1d scrub starts
Jan 27 08:35:44 compute-0 ceph-mon[74357]: 9.1d scrub ok
Jan 27 08:35:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:35:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:35:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:35:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:35:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:35:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:35:45 compute-0 ceph-mon[74357]: pgmap v349: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:46.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:46.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:46 compute-0 ceph-mon[74357]: pgmap v350: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:48.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:48.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:48 compute-0 sshd-session[117672]: Accepted publickey for zuul from 192.168.122.30 port 52142 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:35:48 compute-0 systemd-logind[799]: New session 39 of user zuul.
Jan 27 08:35:48 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 27 08:35:48 compute-0 sshd-session[117672]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:35:49 compute-0 sudo[117826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffagxoryrijsqrzjllduecoqcunbripq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502948.9679825-26-54574221336845/AnsiballZ_file.py'
Jan 27 08:35:49 compute-0 sudo[117826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:49 compute-0 python3.9[117828]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:35:49 compute-0 sudo[117826]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:50 compute-0 ceph-mon[74357]: pgmap v351: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:50 compute-0 sudo[117978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebxwciwofjackkzwcnajbjaqsjyuqbyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502949.8461878-62-223983019665139/AnsiballZ_stat.py'
Jan 27 08:35:50 compute-0 sudo[117978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:50 compute-0 python3.9[117980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:35:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:50.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:50 compute-0 sudo[117978]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:50.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:50 compute-0 sudo[118056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aylgdwxwldliiohkscyzzfrqjxghhyuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502949.8461878-62-223983019665139/AnsiballZ_file.py'
Jan 27 08:35:50 compute-0 sudo[118056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:50 compute-0 python3.9[118058]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:35:50 compute-0 sudo[118056]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:51 compute-0 ceph-mon[74357]: pgmap v352: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:51 compute-0 sshd-session[117675]: Connection closed by 192.168.122.30 port 52142
Jan 27 08:35:51 compute-0 sshd-session[117672]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:35:51 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 27 08:35:51 compute-0 systemd[1]: session-39.scope: Consumed 1.407s CPU time.
Jan 27 08:35:51 compute-0 systemd-logind[799]: Session 39 logged out. Waiting for processes to exit.
Jan 27 08:35:51 compute-0 systemd-logind[799]: Removed session 39.
Jan 27 08:35:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:35:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:52.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:35:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:52.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:53 compute-0 ceph-mon[74357]: pgmap v353: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:35:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:54.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:35:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:35:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:54.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:35:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:54 compute-0 sudo[118085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:54 compute-0 sudo[118085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:54 compute-0 sudo[118085]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:54 compute-0 sudo[118110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:35:54 compute-0 sudo[118110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:35:54 compute-0 sudo[118110]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:55 compute-0 ceph-mon[74357]: pgmap v354: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:56.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:56.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:56 compute-0 sshd-session[118136]: Accepted publickey for zuul from 192.168.122.30 port 41778 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:35:56 compute-0 systemd-logind[799]: New session 40 of user zuul.
Jan 27 08:35:56 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 27 08:35:56 compute-0 sshd-session[118136]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:35:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:56 compute-0 ceph-mon[74357]: pgmap v355: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:57 compute-0 python3.9[118290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:35:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:35:58.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:35:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:35:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:35:58.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:35:58 compute-0 sudo[118444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gelfqcozaxtxhmhfcleerakqzpfbyays ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502958.2408335-59-194113230534489/AnsiballZ_file.py'
Jan 27 08:35:58 compute-0 sudo[118444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:58 compute-0 python3.9[118446]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:35:58 compute-0 sudo[118444]: pam_unix(sudo:session): session closed for user root
Jan 27 08:35:59 compute-0 sudo[118620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmgdaztxgffjgwbhidkwgzqqrmsrxnqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502959.1624722-83-121116361714512/AnsiballZ_stat.py'
Jan 27 08:35:59 compute-0 sudo[118620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:35:59 compute-0 ceph-mon[74357]: pgmap v356: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:35:59 compute-0 python3.9[118622]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:35:59 compute-0 sudo[118620]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:00 compute-0 sudo[118698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grdpngpjoqwjzmurgtvskmrqhxoqsyfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502959.1624722-83-121116361714512/AnsiballZ_file.py'
Jan 27 08:36:00 compute-0 sudo[118698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:00 compute-0 python3.9[118700]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.fe1qqx_t recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:00 compute-0 sudo[118698]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:00.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:00.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:00 compute-0 ceph-mon[74357]: pgmap v357: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:01 compute-0 sudo[118851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oobhuzrijfmwtayrgnnyucvhaanxbkka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502960.9247506-143-220398740189497/AnsiballZ_stat.py'
Jan 27 08:36:01 compute-0 sudo[118851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:01 compute-0 python3.9[118853]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:01 compute-0 sudo[118851]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:01 compute-0 sudo[118929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gemghcmrqefozowlbaknslknhlcotjgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502960.9247506-143-220398740189497/AnsiballZ_file.py'
Jan 27 08:36:01 compute-0 sudo[118929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:01 compute-0 python3.9[118931]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.qvit01up recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:01 compute-0 sudo[118929]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:02 compute-0 sudo[119081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmutnqulkhpvdyqkqpsbybylhnoufldp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502962.1509485-182-164935657794706/AnsiballZ_file.py'
Jan 27 08:36:02 compute-0 sudo[119081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:02.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:02.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:02 compute-0 python3.9[119083]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:36:02 compute-0 sudo[119081]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:03 compute-0 sudo[119234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uukswfmsmnlwdqdekxiaovsovqnihmeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502962.8929772-206-162375657963587/AnsiballZ_stat.py'
Jan 27 08:36:03 compute-0 sudo[119234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:03 compute-0 python3.9[119236]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:03 compute-0 sudo[119234]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:03 compute-0 sudo[119312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpjxrvxunrlmxvaoznesudxampipdeyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502962.8929772-206-162375657963587/AnsiballZ_file.py'
Jan 27 08:36:03 compute-0 sudo[119312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:03 compute-0 ceph-mon[74357]: pgmap v358: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:03 compute-0 python3.9[119314]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:36:03 compute-0 sudo[119312]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:04 compute-0 sudo[119464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfhfhsiswpycetuykgbvxsfysabykrwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502964.0771835-206-87095506493208/AnsiballZ_stat.py'
Jan 27 08:36:04 compute-0 sudo[119464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:04.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:04 compute-0 python3.9[119466]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:04.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:04 compute-0 sudo[119464]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:04 compute-0 sudo[119542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raoiinfomtavynudvgqluatymsfyvlni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502964.0771835-206-87095506493208/AnsiballZ_file.py'
Jan 27 08:36:04 compute-0 sudo[119542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:04 compute-0 ceph-mon[74357]: pgmap v359: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:05 compute-0 python3.9[119544]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:36:05 compute-0 sudo[119542]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:05 compute-0 sudo[119695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwxqzbgxkhfktrdbkdkjmgyjwdkeiqnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502965.2583055-275-88623572129150/AnsiballZ_file.py'
Jan 27 08:36:05 compute-0 sudo[119695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:05 compute-0 python3.9[119697]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:05 compute-0 sudo[119695]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:06 compute-0 sudo[119847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzjkmlvzeffzvwxkybkabejoyphfmwic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502966.0068047-299-56608322992218/AnsiballZ_stat.py'
Jan 27 08:36:06 compute-0 sudo[119847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:06 compute-0 python3.9[119849]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:06 compute-0 sudo[119847]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:36:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:06.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:36:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:06.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:06 compute-0 sudo[119925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvnzopfsadksvunxtnrrjihbjjicwedm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502966.0068047-299-56608322992218/AnsiballZ_file.py'
Jan 27 08:36:06 compute-0 sudo[119925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:06 compute-0 python3.9[119927]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:06 compute-0 sudo[119925]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:07 compute-0 sudo[120078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dywpvlsultlxcdzwoeqpaywlbffuifuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502967.2499962-335-182984566390607/AnsiballZ_stat.py'
Jan 27 08:36:07 compute-0 sudo[120078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:07 compute-0 python3.9[120080]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:07 compute-0 sudo[120078]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:07 compute-0 ceph-mon[74357]: pgmap v360: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:08 compute-0 sudo[120156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpwwhinriaczblcohwrftztrpqxjivix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502967.2499962-335-182984566390607/AnsiballZ_file.py'
Jan 27 08:36:08 compute-0 sudo[120156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:08 compute-0 python3.9[120158]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:08 compute-0 sudo[120156]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:08.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:08.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:08 compute-0 ceph-mon[74357]: pgmap v361: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:09 compute-0 sudo[120309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzpsfzmnkkflqmgmmxfouvydjcxyqnck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502968.5909915-371-137028640044672/AnsiballZ_systemd.py'
Jan 27 08:36:09 compute-0 sudo[120309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:09 compute-0 python3.9[120311]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:36:09 compute-0 systemd[1]: Reloading.
Jan 27 08:36:09 compute-0 systemd-rc-local-generator[120332]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:36:09 compute-0 systemd-sysv-generator[120338]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:36:10 compute-0 sudo[120309]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:10.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:10 compute-0 sudo[120498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zedzjsaspewlmfcjgbpncuzyvmqyjphp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502970.2748394-395-274091465911880/AnsiballZ_stat.py'
Jan 27 08:36:10 compute-0 sudo[120498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:10.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:10 compute-0 python3.9[120500]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:10 compute-0 sudo[120498]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:10 compute-0 sudo[120576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lssloxarskgynoauuhnbsnjbrovnoaey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502970.2748394-395-274091465911880/AnsiballZ_file.py'
Jan 27 08:36:10 compute-0 sudo[120576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:11 compute-0 ceph-mon[74357]: pgmap v362: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:11 compute-0 python3.9[120578]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:11 compute-0 sudo[120576]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:11 compute-0 sudo[120729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vocnbmsnocdeiynmlrtnrlqllexlywpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502971.528881-431-2703976597921/AnsiballZ_stat.py'
Jan 27 08:36:11 compute-0 sudo[120729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:11 compute-0 python3.9[120731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:11 compute-0 sudo[120729]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:12 compute-0 sudo[120807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osvrhejulbvfpddrrmciiwtrrgenmckn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502971.528881-431-2703976597921/AnsiballZ_file.py'
Jan 27 08:36:12 compute-0 sudo[120807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:12 compute-0 python3.9[120809]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:12 compute-0 sudo[120807]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:36:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:12.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:36:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:12.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:13 compute-0 sudo[120959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pibmwuwroiwcyocxfkxtyfibijzpqgvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502972.6868064-467-143789924747920/AnsiballZ_systemd.py'
Jan 27 08:36:13 compute-0 sudo[120959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:13 compute-0 python3.9[120961]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:36:13 compute-0 systemd[1]: Reloading.
Jan 27 08:36:13 compute-0 systemd-rc-local-generator[120985]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:36:13 compute-0 systemd-sysv-generator[120991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:36:13 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 08:36:13 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 08:36:13 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 08:36:13 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 08:36:13 compute-0 sudo[120959]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:13 compute-0 ceph-mon[74357]: pgmap v363: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:13 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 27 08:36:13 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:13.927021) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:36:13 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 27 08:36:13 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502973927381, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2354, "num_deletes": 251, "total_data_size": 3527466, "memory_usage": 3600864, "flush_reason": "Manual Compaction"}
Jan 27 08:36:13 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 27 08:36:13 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502973992700, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3458688, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7624, "largest_seqno": 9977, "table_properties": {"data_size": 3448707, "index_size": 5899, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25346, "raw_average_key_size": 21, "raw_value_size": 3426500, "raw_average_value_size": 2884, "num_data_blocks": 263, "num_entries": 1188, "num_filter_entries": 1188, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502789, "oldest_key_time": 1769502789, "file_creation_time": 1769502973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:36:13 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 65717 microseconds, and 9278 cpu microseconds.
Jan 27 08:36:13 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:13.992764) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3458688 bytes OK
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:13.992787) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.039233) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.039284) EVENT_LOG_v1 {"time_micros": 1769502974039271, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.039312) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3517349, prev total WAL file size 3517349, number of live WAL files 2.
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.040460) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3377KB)], [20(7791KB)]
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502974040527, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11437004, "oldest_snapshot_seqno": -1}
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3842 keys, 9731670 bytes, temperature: kUnknown
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502974235858, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9731670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9700152, "index_size": 20784, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 92588, "raw_average_key_size": 24, "raw_value_size": 9625039, "raw_average_value_size": 2505, "num_data_blocks": 909, "num_entries": 3842, "num_filter_entries": 3842, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769502974, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.236132) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9731670 bytes
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.282026) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.5 rd, 49.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4363, records dropped: 521 output_compression: NoCompression
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.282073) EVENT_LOG_v1 {"time_micros": 1769502974282055, "job": 6, "event": "compaction_finished", "compaction_time_micros": 195438, "compaction_time_cpu_micros": 23596, "output_level": 6, "num_output_files": 1, "total_output_size": 9731670, "num_input_records": 4363, "num_output_records": 3842, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502974283186, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769502974284858, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.040307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.284912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.284917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.284919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.284920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:36:14 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:36:14.284922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:36:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:14.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:14.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:14 compute-0 ceph-mon[74357]: pgmap v364: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:36:14
Jan 27 08:36:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:36:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:36:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['default.rgw.log', 'images', 'vms', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control']
Jan 27 08:36:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:36:15 compute-0 sudo[121082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:36:15 compute-0 sudo[121082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:15 compute-0 sudo[121082]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:36:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:36:15 compute-0 sudo[121131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:15 compute-0 sudo[121131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:15 compute-0 sudo[121131]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:15 compute-0 python3.9[121206]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:36:15 compute-0 network[121223]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:36:15 compute-0 network[121224]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:36:15 compute-0 network[121225]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:36:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:16.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:16.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:17 compute-0 ceph-mon[74357]: pgmap v365: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:17 compute-0 sudo[121292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:17 compute-0 sudo[121292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:17 compute-0 sudo[121292]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:17 compute-0 sudo[121317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:36:17 compute-0 sudo[121317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:17 compute-0 sudo[121317]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:17 compute-0 sudo[121342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:17 compute-0 sudo[121342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:17 compute-0 sudo[121342]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:17 compute-0 sudo[121367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:36:17 compute-0 sudo[121367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:18 compute-0 sudo[121367]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:36:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:18.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:36:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:18.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:19 compute-0 ceph-mon[74357]: pgmap v366: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:20 compute-0 sudo[121618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilatftmsysrionnlvcqrijuxufivfkum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502979.954581-545-271548730588083/AnsiballZ_stat.py'
Jan 27 08:36:20 compute-0 sudo[121618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:20 compute-0 python3.9[121620]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:20 compute-0 sudo[121618]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:20.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:20.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:20 compute-0 sudo[121696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tshbjcjvotnigfgoggbunqcpyoiimjpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502979.954581-545-271548730588083/AnsiballZ_file.py'
Jan 27 08:36:20 compute-0 sudo[121696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:20 compute-0 python3.9[121698]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:20 compute-0 sudo[121696]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:21 compute-0 ceph-mon[74357]: pgmap v367: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:21 compute-0 sudo[121849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohsgmirbyrvvrvvmxqrlamnwuwhtdjpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502981.2643209-584-227989265258440/AnsiballZ_file.py'
Jan 27 08:36:21 compute-0 sudo[121849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:36:21 compute-0 python3.9[121851]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:21 compute-0 sudo[121849]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:36:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:36:22 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:36:22 compute-0 sudo[122001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdsybabdcamporbxppufigdjxsdeuypv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502982.1522045-608-230371664973247/AnsiballZ_stat.py'
Jan 27 08:36:22 compute-0 sudo[122001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:22.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:22.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:22 compute-0 python3.9[122003]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:22 compute-0 sudo[122001]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:36:22 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:36:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:36:22 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:36:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:36:23 compute-0 sudo[122079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raazdyhmbynjugpnbhrkhuoszehnxzss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502982.1522045-608-230371664973247/AnsiballZ_file.py'
Jan 27 08:36:23 compute-0 sudo[122079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:23 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:36:23 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:36:23 compute-0 ceph-mon[74357]: pgmap v368: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:23 compute-0 python3.9[122081]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:23 compute-0 sudo[122079]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:36:23 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 511dab26-8ec7-4eff-a49c-7b576980d6c4 does not exist
Jan 27 08:36:23 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 332f38cb-6872-45aa-b288-dbce54d89898 does not exist
Jan 27 08:36:23 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 94924c3f-0048-4a1c-834c-c58dc3aa1979 does not exist
Jan 27 08:36:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:36:23 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:36:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:36:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:36:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:36:23 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:36:23 compute-0 sudo[122107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:23 compute-0 sudo[122107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:23 compute-0 sudo[122107]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:23 compute-0 sudo[122132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:36:23 compute-0 sudo[122132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:23 compute-0 sudo[122132]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:23 compute-0 sudo[122158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:23 compute-0 sudo[122158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:23 compute-0 sudo[122158]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:23 compute-0 sudo[122209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:36:23 compute-0 sudo[122209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:36:24 compute-0 podman[122299]: 2026-01-27 08:36:24.095780262 +0000 UTC m=+0.020207130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:36:24 compute-0 podman[122299]: 2026-01-27 08:36:24.203642492 +0000 UTC m=+0.128069340 container create dc480fd1fa0333dc6ec088c518ed1e154d9c51ef0961aad06494ca439880528d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:36:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:36:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:36:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:36:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:36:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:36:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:36:24 compute-0 systemd[1]: Started libpod-conmon-dc480fd1fa0333dc6ec088c518ed1e154d9c51ef0961aad06494ca439880528d.scope.
Jan 27 08:36:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:36:24 compute-0 podman[122299]: 2026-01-27 08:36:24.345112443 +0000 UTC m=+0.269539321 container init dc480fd1fa0333dc6ec088c518ed1e154d9c51ef0961aad06494ca439880528d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:36:24 compute-0 podman[122299]: 2026-01-27 08:36:24.35273905 +0000 UTC m=+0.277165928 container start dc480fd1fa0333dc6ec088c518ed1e154d9c51ef0961aad06494ca439880528d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:36:24 compute-0 podman[122299]: 2026-01-27 08:36:24.358553648 +0000 UTC m=+0.282980536 container attach dc480fd1fa0333dc6ec088c518ed1e154d9c51ef0961aad06494ca439880528d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 08:36:24 compute-0 recursing_perlman[122362]: 167 167
Jan 27 08:36:24 compute-0 systemd[1]: libpod-dc480fd1fa0333dc6ec088c518ed1e154d9c51ef0961aad06494ca439880528d.scope: Deactivated successfully.
Jan 27 08:36:24 compute-0 podman[122299]: 2026-01-27 08:36:24.362089964 +0000 UTC m=+0.286516832 container died dc480fd1fa0333dc6ec088c518ed1e154d9c51ef0961aad06494ca439880528d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:36:24 compute-0 sudo[122392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srverbjaliycqqhdzdmyubmwpjqadtez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502983.7438667-653-109689960287214/AnsiballZ_timezone.py'
Jan 27 08:36:24 compute-0 sudo[122392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a5186a08ff3f2c0a492f63a367bd58afc647dde8067c569bd9670c62cf61062-merged.mount: Deactivated successfully.
Jan 27 08:36:24 compute-0 podman[122299]: 2026-01-27 08:36:24.415134655 +0000 UTC m=+0.339561503 container remove dc480fd1fa0333dc6ec088c518ed1e154d9c51ef0961aad06494ca439880528d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:36:24 compute-0 systemd[1]: libpod-conmon-dc480fd1fa0333dc6ec088c518ed1e154d9c51ef0961aad06494ca439880528d.scope: Deactivated successfully.
Jan 27 08:36:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:24.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:24 compute-0 python3.9[122397]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 27 08:36:24 compute-0 podman[122416]: 2026-01-27 08:36:24.620478651 +0000 UTC m=+0.060500634 container create 3d41f0d2509457c8839697b01a99ecff2b43ea935bfa83997a16ed2108ca20c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:36:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:24.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:24 compute-0 systemd[1]: Starting Time & Date Service...
Jan 27 08:36:24 compute-0 systemd[1]: Started libpod-conmon-3d41f0d2509457c8839697b01a99ecff2b43ea935bfa83997a16ed2108ca20c6.scope.
Jan 27 08:36:24 compute-0 podman[122416]: 2026-01-27 08:36:24.589970713 +0000 UTC m=+0.029992766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:36:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c21a617df4a114e7a1344c58753efb8c606379a227cd90ac26e230a390b4f3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c21a617df4a114e7a1344c58753efb8c606379a227cd90ac26e230a390b4f3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c21a617df4a114e7a1344c58753efb8c606379a227cd90ac26e230a390b4f3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c21a617df4a114e7a1344c58753efb8c606379a227cd90ac26e230a390b4f3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c21a617df4a114e7a1344c58753efb8c606379a227cd90ac26e230a390b4f3c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:24 compute-0 podman[122416]: 2026-01-27 08:36:24.739687328 +0000 UTC m=+0.179709311 container init 3d41f0d2509457c8839697b01a99ecff2b43ea935bfa83997a16ed2108ca20c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 08:36:24 compute-0 podman[122416]: 2026-01-27 08:36:24.750665757 +0000 UTC m=+0.190687740 container start 3d41f0d2509457c8839697b01a99ecff2b43ea935bfa83997a16ed2108ca20c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:36:24 compute-0 podman[122416]: 2026-01-27 08:36:24.754686625 +0000 UTC m=+0.194708658 container attach 3d41f0d2509457c8839697b01a99ecff2b43ea935bfa83997a16ed2108ca20c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:36:24 compute-0 systemd[1]: Started Time & Date Service.
Jan 27 08:36:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:24 compute-0 sudo[122392]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:25 compute-0 sudo[122596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aspresxpzxifpgbnpnnqgqprwbxtwsdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502985.1226301-680-242043693399192/AnsiballZ_file.py'
Jan 27 08:36:25 compute-0 sudo[122596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:25 compute-0 flamboyant_swartz[122435]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:36:25 compute-0 flamboyant_swartz[122435]: --> relative data size: 1.0
Jan 27 08:36:25 compute-0 flamboyant_swartz[122435]: --> All data devices are unavailable
Jan 27 08:36:25 compute-0 systemd[1]: libpod-3d41f0d2509457c8839697b01a99ecff2b43ea935bfa83997a16ed2108ca20c6.scope: Deactivated successfully.
Jan 27 08:36:25 compute-0 podman[122416]: 2026-01-27 08:36:25.548585546 +0000 UTC m=+0.988607519 container died 3d41f0d2509457c8839697b01a99ecff2b43ea935bfa83997a16ed2108ca20c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:36:25 compute-0 python3.9[122600]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:25 compute-0 sudo[122596]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:25 compute-0 ceph-mon[74357]: pgmap v369: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c21a617df4a114e7a1344c58753efb8c606379a227cd90ac26e230a390b4f3c-merged.mount: Deactivated successfully.
Jan 27 08:36:26 compute-0 podman[122416]: 2026-01-27 08:36:26.1113956 +0000 UTC m=+1.551417613 container remove 3d41f0d2509457c8839697b01a99ecff2b43ea935bfa83997a16ed2108ca20c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:36:26 compute-0 systemd[1]: libpod-conmon-3d41f0d2509457c8839697b01a99ecff2b43ea935bfa83997a16ed2108ca20c6.scope: Deactivated successfully.
Jan 27 08:36:26 compute-0 sudo[122209]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:26 compute-0 sudo[122718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:26 compute-0 sudo[122718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:26 compute-0 sudo[122718]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:26 compute-0 sudo[122767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:36:26 compute-0 sudo[122767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:26 compute-0 sudo[122767]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:26 compute-0 sudo[122797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:26 compute-0 sudo[122841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llknjqzzwttbkfccodcrvssmxeznrcrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502985.9309196-704-145421486505285/AnsiballZ_stat.py'
Jan 27 08:36:26 compute-0 sudo[122797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:26 compute-0 sudo[122841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:26 compute-0 sudo[122797]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:26 compute-0 sudo[122846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:36:26 compute-0 sudo[122846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:26 compute-0 python3.9[122845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:26.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:26 compute-0 sudo[122841]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:26.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:26 compute-0 podman[122933]: 2026-01-27 08:36:26.660241704 +0000 UTC m=+0.039657208 container create 8a2541533da120932fbba8c213334b489d7e61d5e629318bb37897e9adf0b6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:36:26 compute-0 systemd[1]: Started libpod-conmon-8a2541533da120932fbba8c213334b489d7e61d5e629318bb37897e9adf0b6d4.scope.
Jan 27 08:36:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:36:26 compute-0 podman[122933]: 2026-01-27 08:36:26.645506914 +0000 UTC m=+0.024922438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:36:26 compute-0 podman[122933]: 2026-01-27 08:36:26.746005553 +0000 UTC m=+0.125421077 container init 8a2541533da120932fbba8c213334b489d7e61d5e629318bb37897e9adf0b6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_germain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:36:26 compute-0 podman[122933]: 2026-01-27 08:36:26.752023226 +0000 UTC m=+0.131438730 container start 8a2541533da120932fbba8c213334b489d7e61d5e629318bb37897e9adf0b6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:36:26 compute-0 podman[122933]: 2026-01-27 08:36:26.755617075 +0000 UTC m=+0.135032599 container attach 8a2541533da120932fbba8c213334b489d7e61d5e629318bb37897e9adf0b6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 27 08:36:26 compute-0 eloquent_germain[122965]: 167 167
Jan 27 08:36:26 compute-0 systemd[1]: libpod-8a2541533da120932fbba8c213334b489d7e61d5e629318bb37897e9adf0b6d4.scope: Deactivated successfully.
Jan 27 08:36:26 compute-0 podman[122933]: 2026-01-27 08:36:26.756375745 +0000 UTC m=+0.135791249 container died 8a2541533da120932fbba8c213334b489d7e61d5e629318bb37897e9adf0b6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_germain, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:36:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-091f520acfd26f62a039e818afee080a8f034961d06c5b35f1143cd3761e4534-merged.mount: Deactivated successfully.
Jan 27 08:36:26 compute-0 podman[122933]: 2026-01-27 08:36:26.788826196 +0000 UTC m=+0.168241700 container remove 8a2541533da120932fbba8c213334b489d7e61d5e629318bb37897e9adf0b6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_germain, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:36:26 compute-0 sudo[123014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyebgcslllghfcsudxrcrzxvkagiftyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502985.9309196-704-145421486505285/AnsiballZ_file.py'
Jan 27 08:36:26 compute-0 sudo[123014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:26 compute-0 systemd[1]: libpod-conmon-8a2541533da120932fbba8c213334b489d7e61d5e629318bb37897e9adf0b6d4.scope: Deactivated successfully.
Jan 27 08:36:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:26 compute-0 podman[123029]: 2026-01-27 08:36:26.927106981 +0000 UTC m=+0.034836427 container create 13bbb50562d004ce68a1746a0b3486998fc80a57fb8e77fa4eee970ef7346dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_morse, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:36:26 compute-0 systemd[1]: Started libpod-conmon-13bbb50562d004ce68a1746a0b3486998fc80a57fb8e77fa4eee970ef7346dd9.scope.
Jan 27 08:36:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e51878a37bd88b0c6298625d3d0ec2587b1988b510330137470167623ca3241/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e51878a37bd88b0c6298625d3d0ec2587b1988b510330137470167623ca3241/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e51878a37bd88b0c6298625d3d0ec2587b1988b510330137470167623ca3241/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e51878a37bd88b0c6298625d3d0ec2587b1988b510330137470167623ca3241/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:27 compute-0 podman[123029]: 2026-01-27 08:36:27.001751998 +0000 UTC m=+0.109481464 container init 13bbb50562d004ce68a1746a0b3486998fc80a57fb8e77fa4eee970ef7346dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_morse, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 27 08:36:27 compute-0 python3.9[123023]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:27 compute-0 podman[123029]: 2026-01-27 08:36:27.008706397 +0000 UTC m=+0.116435833 container start 13bbb50562d004ce68a1746a0b3486998fc80a57fb8e77fa4eee970ef7346dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 27 08:36:27 compute-0 podman[123029]: 2026-01-27 08:36:26.913101691 +0000 UTC m=+0.020831157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:36:27 compute-0 podman[123029]: 2026-01-27 08:36:27.012180661 +0000 UTC m=+0.119910157 container attach 13bbb50562d004ce68a1746a0b3486998fc80a57fb8e77fa4eee970ef7346dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:36:27 compute-0 sudo[123014]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:27 compute-0 ceph-mon[74357]: pgmap v370: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:27 compute-0 sudo[123200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxrzxkeocchaxnqpuizcrckrhrkqtbra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502987.3015242-740-225489962036478/AnsiballZ_stat.py'
Jan 27 08:36:27 compute-0 sudo[123200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:27 compute-0 determined_morse[123045]: {
Jan 27 08:36:27 compute-0 determined_morse[123045]:     "0": [
Jan 27 08:36:27 compute-0 determined_morse[123045]:         {
Jan 27 08:36:27 compute-0 determined_morse[123045]:             "devices": [
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "/dev/loop3"
Jan 27 08:36:27 compute-0 determined_morse[123045]:             ],
Jan 27 08:36:27 compute-0 determined_morse[123045]:             "lv_name": "ceph_lv0",
Jan 27 08:36:27 compute-0 determined_morse[123045]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:36:27 compute-0 determined_morse[123045]:             "lv_size": "7511998464",
Jan 27 08:36:27 compute-0 determined_morse[123045]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:36:27 compute-0 determined_morse[123045]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:36:27 compute-0 determined_morse[123045]:             "name": "ceph_lv0",
Jan 27 08:36:27 compute-0 determined_morse[123045]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:36:27 compute-0 determined_morse[123045]:             "tags": {
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "ceph.cluster_name": "ceph",
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "ceph.crush_device_class": "",
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "ceph.encrypted": "0",
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "ceph.osd_id": "0",
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "ceph.type": "block",
Jan 27 08:36:27 compute-0 determined_morse[123045]:                 "ceph.vdo": "0"
Jan 27 08:36:27 compute-0 determined_morse[123045]:             },
Jan 27 08:36:27 compute-0 determined_morse[123045]:             "type": "block",
Jan 27 08:36:27 compute-0 determined_morse[123045]:             "vg_name": "ceph_vg0"
Jan 27 08:36:27 compute-0 determined_morse[123045]:         }
Jan 27 08:36:27 compute-0 determined_morse[123045]:     ]
Jan 27 08:36:27 compute-0 determined_morse[123045]: }
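The JSON block above, printed by the short-lived determined_morse container, has the shape of `ceph-volume lvm list --format json` output: OSD id "0" maps to one logical volume (/dev/ceph_vg0/ceph_lv0 backed by /dev/loop3) whose `lv_tags` string carries the cluster and OSD identity. A minimal sketch of consuming such a payload in Python — the literal below is trimmed to the fields used, and the variable names are illustrative, not any cephadm API:

    import json

    # Trimmed copy of the payload logged above; only the fields used here.
    raw_text = """
    {
        "0": [
            {
                "devices": ["/dev/loop3"],
                "lv_path": "/dev/ceph_vg0/ceph_lv0",
                "lv_tags": "ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.type=block"
            }
        ]
    }
    """
    for osd_id, lvs in json.loads(raw_text).items():
        for lv in lvs:
            # lv_tags is a flat "key=value,key=value" string; unpack it.
            tags = dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
            print(osd_id, lv["lv_path"], tags.get("ceph.osd_fsid"))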
Jan 27 08:36:27 compute-0 systemd[1]: libpod-13bbb50562d004ce68a1746a0b3486998fc80a57fb8e77fa4eee970ef7346dd9.scope: Deactivated successfully.
Jan 27 08:36:27 compute-0 conmon[123045]: conmon 13bbb50562d004ce68a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13bbb50562d004ce68a1746a0b3486998fc80a57fb8e77fa4eee970ef7346dd9.scope/container/memory.events
Jan 27 08:36:27 compute-0 podman[123029]: 2026-01-27 08:36:27.781616137 +0000 UTC m=+0.889345623 container died 13bbb50562d004ce68a1746a0b3486998fc80a57fb8e77fa4eee970ef7346dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 27 08:36:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e51878a37bd88b0c6298625d3d0ec2587b1988b510330137470167623ca3241-merged.mount: Deactivated successfully.
Jan 27 08:36:27 compute-0 python3.9[123202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:27 compute-0 sudo[123200]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:27 compute-0 podman[123029]: 2026-01-27 08:36:27.959103336 +0000 UTC m=+1.066832782 container remove 13bbb50562d004ce68a1746a0b3486998fc80a57fb8e77fa4eee970ef7346dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 27 08:36:27 compute-0 systemd[1]: libpod-conmon-13bbb50562d004ce68a1746a0b3486998fc80a57fb8e77fa4eee970ef7346dd9.scope: Deactivated successfully.
Jan 27 08:36:27 compute-0 sudo[122846]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:28 compute-0 sudo[123268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:28 compute-0 sudo[123268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:28 compute-0 sudo[123268]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:28 compute-0 sudo[123340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxxdsvrzvmtvlrsmhmldxwdnvmqxyccm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502987.3015242-740-225489962036478/AnsiballZ_file.py'
Jan 27 08:36:28 compute-0 sudo[123340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:28 compute-0 sudo[123306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:36:28 compute-0 sudo[123306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:28 compute-0 sudo[123306]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:28 compute-0 sudo[123350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:28 compute-0 sudo[123350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:28 compute-0 sudo[123350]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:28 compute-0 sudo[123375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:36:28 compute-0 sudo[123375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:28 compute-0 python3.9[123347]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.deuaasm8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:28 compute-0 sudo[123340]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:28 compute-0 podman[123464]: 2026-01-27 08:36:28.503639845 +0000 UTC m=+0.034767906 container create 8e762f56d4fba47e6a8cb98fb40712f7a60f41178c866bbc1c6223ec78c791b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:36:28 compute-0 systemd[1]: Started libpod-conmon-8e762f56d4fba47e6a8cb98fb40712f7a60f41178c866bbc1c6223ec78c791b7.scope.
Jan 27 08:36:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:28.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:36:28 compute-0 podman[123464]: 2026-01-27 08:36:28.569271237 +0000 UTC m=+0.100399318 container init 8e762f56d4fba47e6a8cb98fb40712f7a60f41178c866bbc1c6223ec78c791b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:36:28 compute-0 podman[123464]: 2026-01-27 08:36:28.57528779 +0000 UTC m=+0.106415851 container start 8e762f56d4fba47e6a8cb98fb40712f7a60f41178c866bbc1c6223ec78c791b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:36:28 compute-0 podman[123464]: 2026-01-27 08:36:28.578322572 +0000 UTC m=+0.109450663 container attach 8e762f56d4fba47e6a8cb98fb40712f7a60f41178c866bbc1c6223ec78c791b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:36:28 compute-0 objective_tu[123480]: 167 167
Jan 27 08:36:28 compute-0 systemd[1]: libpod-8e762f56d4fba47e6a8cb98fb40712f7a60f41178c866bbc1c6223ec78c791b7.scope: Deactivated successfully.
Jan 27 08:36:28 compute-0 podman[123464]: 2026-01-27 08:36:28.5800573 +0000 UTC m=+0.111185361 container died 8e762f56d4fba47e6a8cb98fb40712f7a60f41178c866bbc1c6223ec78c791b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:36:28 compute-0 podman[123464]: 2026-01-27 08:36:28.488149884 +0000 UTC m=+0.019277965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:36:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4320f79e48a14944e8a5db342d78a6527477d45f09465d00133150acb674e69-merged.mount: Deactivated successfully.
Jan 27 08:36:28 compute-0 podman[123464]: 2026-01-27 08:36:28.612293055 +0000 UTC m=+0.143421116 container remove 8e762f56d4fba47e6a8cb98fb40712f7a60f41178c866bbc1c6223ec78c791b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:36:28 compute-0 systemd[1]: libpod-conmon-8e762f56d4fba47e6a8cb98fb40712f7a60f41178c866bbc1c6223ec78c791b7.scope: Deactivated successfully.
Jan 27 08:36:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:28.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:28 compute-0 podman[123578]: 2026-01-27 08:36:28.753293925 +0000 UTC m=+0.039440882 container create 5942cb3cc854788806e5bb423bd2887f99d1ab581b45d7ae09138ce4f977721a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_goodall, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:36:28 compute-0 systemd[1]: Started libpod-conmon-5942cb3cc854788806e5bb423bd2887f99d1ab581b45d7ae09138ce4f977721a.scope.
Jan 27 08:36:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:36:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaa120872cfe46e2012e4463f2d56e3a063a5b856c2d70caaae7ca19b7004417/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaa120872cfe46e2012e4463f2d56e3a063a5b856c2d70caaae7ca19b7004417/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaa120872cfe46e2012e4463f2d56e3a063a5b856c2d70caaae7ca19b7004417/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaa120872cfe46e2012e4463f2d56e3a063a5b856c2d70caaae7ca19b7004417/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:36:28 compute-0 podman[123578]: 2026-01-27 08:36:28.829208286 +0000 UTC m=+0.115355243 container init 5942cb3cc854788806e5bb423bd2887f99d1ab581b45d7ae09138ce4f977721a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 08:36:28 compute-0 podman[123578]: 2026-01-27 08:36:28.734456293 +0000 UTC m=+0.020603270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:36:28 compute-0 podman[123578]: 2026-01-27 08:36:28.838316513 +0000 UTC m=+0.124463470 container start 5942cb3cc854788806e5bb423bd2887f99d1ab581b45d7ae09138ce4f977721a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_goodall, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:36:28 compute-0 podman[123578]: 2026-01-27 08:36:28.841262003 +0000 UTC m=+0.127408990 container attach 5942cb3cc854788806e5bb423bd2887f99d1ab581b45d7ae09138ce4f977721a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:36:28 compute-0 sudo[123648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgwskfzffzfbspbkdetkyngbkbukkckl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502988.5706973-776-129267520997026/AnsiballZ_stat.py'
Jan 27 08:36:28 compute-0 sudo[123648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:28 compute-0 ceph-mon[74357]: pgmap v371: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:29 compute-0 python3.9[123652]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:29 compute-0 sudo[123648]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:29 compute-0 sudo[123729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmomhumyopsqsxozozstlcmfvciospwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502988.5706973-776-129267520997026/AnsiballZ_file.py'
Jan 27 08:36:29 compute-0 sudo[123729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:29 compute-0 python3.9[123731]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:29 compute-0 sudo[123729]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:29 compute-0 zen_goodall[123619]: {
Jan 27 08:36:29 compute-0 zen_goodall[123619]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:36:29 compute-0 zen_goodall[123619]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:36:29 compute-0 zen_goodall[123619]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:36:29 compute-0 zen_goodall[123619]:         "osd_id": 0,
Jan 27 08:36:29 compute-0 zen_goodall[123619]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:36:29 compute-0 zen_goodall[123619]:         "type": "bluestore"
Jan 27 08:36:29 compute-0 zen_goodall[123619]:     }
Jan 27 08:36:29 compute-0 zen_goodall[123619]: }
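This second JSON block is the result of the `ceph-volume ... raw list --format json` call issued at 08:36:28 (sudo[123375] above): a map keyed by OSD fsid, giving the backing device, OSD id, and store type. A sketch of reading it and cross-checking it against the LVM listing earlier (payload copied from the zen_goodall output; the check itself is illustrative, not part of cephadm):

    import json

    # Payload copied from the zen_goodall output above.
    raw_list = json.loads("""
    {
        "c06a7c81-ab3c-42b8-812f-79473670be30": {
            "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
            "type": "bluestore"
        }
    }
    """)
    for osd_uuid, rec in raw_list.items():
        # The map key repeats the record's osd_uuid; sanity-check that,
        # and confirm the fsid matches the ceph.osd_fsid LVM tag above.
        assert rec["osd_uuid"] == osd_uuid
        print(rec["osd_id"], rec["device"], rec["type"])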
Jan 27 08:36:29 compute-0 systemd[1]: libpod-5942cb3cc854788806e5bb423bd2887f99d1ab581b45d7ae09138ce4f977721a.scope: Deactivated successfully.
Jan 27 08:36:29 compute-0 podman[123578]: 2026-01-27 08:36:29.672774404 +0000 UTC m=+0.958921391 container died 5942cb3cc854788806e5bb423bd2887f99d1ab581b45d7ae09138ce4f977721a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:36:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaa120872cfe46e2012e4463f2d56e3a063a5b856c2d70caaae7ca19b7004417-merged.mount: Deactivated successfully.
Jan 27 08:36:29 compute-0 podman[123578]: 2026-01-27 08:36:29.731439677 +0000 UTC m=+1.017586634 container remove 5942cb3cc854788806e5bb423bd2887f99d1ab581b45d7ae09138ce4f977721a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 08:36:29 compute-0 systemd[1]: libpod-conmon-5942cb3cc854788806e5bb423bd2887f99d1ab581b45d7ae09138ce4f977721a.scope: Deactivated successfully.
Jan 27 08:36:29 compute-0 sudo[123375]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:36:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:36:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:36:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:36:29 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 70e2ec4e-126d-47c9-ac75-830f8e956f34 does not exist
Jan 27 08:36:29 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev de40fd14-42dd-422e-8195-d1fc86756e60 does not exist
Jan 27 08:36:29 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d1d8363c-ce36-4dd1-a9b3-d621a8f1d7cf does not exist
Jan 27 08:36:29 compute-0 sudo[123804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:29 compute-0 sudo[123804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:29 compute-0 sudo[123804]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:29 compute-0 sudo[123854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:36:29 compute-0 sudo[123854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:29 compute-0 sudo[123854]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:30 compute-0 sudo[123958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeqxqnjunelhunxnaxzphgrwspkuwnga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502989.8368313-815-149812243904846/AnsiballZ_command.py'
Jan 27 08:36:30 compute-0 sudo[123958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:30 compute-0 python3.9[123960]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:36:30 compute-0 sudo[123958]: pam_unix(sudo:session): session closed for user root
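The `nft -j list ruleset` run above dumps the live ruleset as libnftables JSON: a top-level "nftables" array of objects (metainfo, table, chain, rule, ...). A hedged sketch of consuming that output, assuming nftables is installed and the caller is root:

    import json
    import subprocess

    # Re-run the fact-gathering command from the log and summarise the
    # kinds of objects in the ruleset. Requires root privileges.
    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         capture_output=True, text=True, check=True).stdout
    for obj in json.loads(out)["nftables"]:
        print(next(iter(obj)))  # e.g. "metainfo", "table", "chain", "rule"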
Jan 27 08:36:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:30.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:30.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:36:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:36:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:31 compute-0 sudo[124112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvzyvplbyrisfxufcbitcmnerpjagumd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769502990.7102034-839-27184110082653/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 08:36:31 compute-0 sudo[124112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:31 compute-0 python3[124114]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 08:36:31 compute-0 sudo[124112]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:31 compute-0 ceph-mon[74357]: pgmap v372: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:31 compute-0 sudo[124264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zblzenocbnrpbffjpuvrogfewnnpcxym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502991.5922625-863-4133429825753/AnsiballZ_stat.py'
Jan 27 08:36:31 compute-0 sudo[124264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:32 compute-0 python3.9[124266]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:32 compute-0 sudo[124264]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:32 compute-0 sudo[124342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oswdqhqmwjzcepbrpvabqiboebhpuare ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502991.5922625-863-4133429825753/AnsiballZ_file.py'
Jan 27 08:36:32 compute-0 sudo[124342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 27 08:36:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:32.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 27 08:36:32 compute-0 python3.9[124344]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:32 compute-0 sudo[124342]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:32.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:33 compute-0 sudo[124495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvengjonzsnrytchrpwnpkjvhaibnjyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502992.7764938-899-246842036992327/AnsiballZ_stat.py'
Jan 27 08:36:33 compute-0 sudo[124495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:33 compute-0 python3.9[124497]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:33 compute-0 sudo[124495]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:33 compute-0 sudo[124620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvgjwtigpvodbrmwcspryzyfkgfdstxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502992.7764938-899-246842036992327/AnsiballZ_copy.py'
Jan 27 08:36:33 compute-0 sudo[124620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:33 compute-0 ceph-mon[74357]: pgmap v373: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:34 compute-0 python3.9[124622]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769502992.7764938-899-246842036992327/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:34 compute-0 sudo[124620]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:34.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:34 compute-0 sudo[124772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbqvgqejlqkbtfjmihxhujbzvrazwglb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502994.2280507-944-197987499989692/AnsiballZ_stat.py'
Jan 27 08:36:34 compute-0 sudo[124772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:34.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:34 compute-0 python3.9[124774]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:34 compute-0 sudo[124772]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:35 compute-0 sudo[124850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvodtcpnjeasjeolfrcymxvdkcfgvec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502994.2280507-944-197987499989692/AnsiballZ_file.py'
Jan 27 08:36:35 compute-0 sudo[124850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:35 compute-0 sudo[124854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:35 compute-0 sudo[124854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:35 compute-0 sudo[124854]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:35 compute-0 python3.9[124852]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:35 compute-0 sudo[124879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:35 compute-0 sudo[124850]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:35 compute-0 sudo[124879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:35 compute-0 sudo[124879]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:35 compute-0 sudo[125053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfxdsdcjntdnvxjkbjcpkerdfhtqjkki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502995.5332615-980-70707537264349/AnsiballZ_stat.py'
Jan 27 08:36:35 compute-0 sudo[125053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:35 compute-0 ceph-mon[74357]: pgmap v374: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:36 compute-0 python3.9[125055]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:36 compute-0 sudo[125053]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:36 compute-0 sudo[125131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxjhejtyguvuxtbepdiynpcbpeqykedp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502995.5332615-980-70707537264349/AnsiballZ_file.py'
Jan 27 08:36:36 compute-0 sudo[125131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5d8f6f0 =====
Jan 27 08:36:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5d8f6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:36.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:36 compute-0 radosgw[92542]: beast: 0x7f84d5d8f6f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:36.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:37 compute-0 python3.9[125133]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:37 compute-0 sudo[125131]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:37 compute-0 sudo[125284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwstqfinxorlnajkxpozzxniezizxjgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502997.271346-1016-101082393601835/AnsiballZ_stat.py'
Jan 27 08:36:37 compute-0 sudo[125284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:37 compute-0 python3.9[125286]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:37 compute-0 sudo[125284]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:37 compute-0 ceph-mon[74357]: pgmap v375: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:38 compute-0 sudo[125362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bduijwpqulcegcinnfzilitnasijcguf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502997.271346-1016-101082393601835/AnsiballZ_file.py'
Jan 27 08:36:38 compute-0 sudo[125362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:38 compute-0 python3.9[125364]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:38 compute-0 sudo[125362]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:38 compute-0 sudo[125514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvlkjecrmklbolbhttagiuosyotblmlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502998.608534-1055-172692253889168/AnsiballZ_command.py'
Jan 27 08:36:38 compute-0 sudo[125514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:38.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5d8f6f0 =====
Jan 27 08:36:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5d8f6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:38 compute-0 radosgw[92542]: beast: 0x7f84d5d8f6f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:38.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:39 compute-0 python3.9[125516]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:36:39 compute-0 sudo[125514]: pam_unix(sudo:session): session closed for user root
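The pipeline above is the edpm firewall validation step: the five nft fragments are concatenated in a fixed order (chains, then flushes, rules, update-jumps, jumps) and fed to `nft -c -f -`, which parse-checks the combined ruleset without installing it. The same check, sketched in Python with the paths taken from the log (needs root and the fragment files in place):

    import subprocess
    from pathlib import Path

    # Concatenate the fragments in the order used by the play, then ask
    # nft to check them: -c = check only (no commit), -f - = read stdin.
    fragments = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    ruleset = "\n".join(Path(p).read_text() for p in fragments)
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset,
                   text=True, check=True)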
Jan 27 08:36:39 compute-0 sudo[125670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlysjdgpcyjvmoomuagkmsgxkgsxjiwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769502999.35719-1079-136583261871845/AnsiballZ_blockinfile.py'
Jan 27 08:36:39 compute-0 sudo[125670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:39 compute-0 python3.9[125672]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:39 compute-0 sudo[125670]: pam_unix(sudo:session): session closed for user root
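Given the blockinfile parameters logged above (marker `# {mark} ANSIBLE MANAGED BLOCK` with marker_begin=BEGIN and marker_end=END, validated via `nft -c -f %s`), /etc/sysconfig/nftables.conf should end up carrying a managed block like the following reconstruction; note that edpm-flushes.nft and edpm-update-jumps.nft appear only in the validation pipeline, not in the persisted block:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK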
Jan 27 08:36:39 compute-0 ceph-mon[74357]: pgmap v376: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:40 compute-0 sudo[125822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjmspetzawktcksiuyxiqjpaegzswxrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503000.288481-1106-156163296347671/AnsiballZ_file.py'
Jan 27 08:36:40 compute-0 sudo[125822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:40 compute-0 python3.9[125824]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:40 compute-0 sudo[125822]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:36:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:40.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:36:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:40.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:41 compute-0 sudo[125975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbfzmngyflixccbnyypskjxqxcdibwiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503000.8958914-1106-204330935368719/AnsiballZ_file.py'
Jan 27 08:36:41 compute-0 sudo[125975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:41 compute-0 python3.9[125977]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:41 compute-0 sudo[125975]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:42 compute-0 ceph-mon[74357]: pgmap v377: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:42 compute-0 sudo[126127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqfvkuldpsoravtywsqljajrffkfnnrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503001.7302117-1151-85709920794939/AnsiballZ_mount.py'
Jan 27 08:36:42 compute-0 sudo[126127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:42 compute-0 python3.9[126129]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 08:36:42 compute-0 sudo[126127]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:42 compute-0 sudo[126279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozjswfqcpijxwavbshusvoumscmskphv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503002.602204-1151-256076130757851/AnsiballZ_mount.py'
Jan 27 08:36:42 compute-0 sudo[126279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:42.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:42.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:43 compute-0 python3.9[126281]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 08:36:43 compute-0 sudo[126279]: pam_unix(sudo:session): session closed for user root
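The two ansible.posix.mount tasks above (pagesize=1G and pagesize=2M) use state=mounted, which both mounts the hugetlbfs instance and persists it across boots. Assuming the module's default fstab handling, the resulting /etc/fstab entries would look roughly like this reconstruction (src, path, fstype, opts, dump, and passno all taken from the logged module arguments):

    none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    none /dev/hugepages2M hugetlbfs pagesize=2M 0 0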
Jan 27 08:36:43 compute-0 sshd-session[118139]: Connection closed by 192.168.122.30 port 41778
Jan 27 08:36:43 compute-0 sshd-session[118136]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:36:43 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 27 08:36:43 compute-0 systemd[1]: session-40.scope: Consumed 29.447s CPU time.
Jan 27 08:36:43 compute-0 systemd-logind[799]: Session 40 logged out. Waiting for processes to exit.
Jan 27 08:36:43 compute-0 systemd-logind[799]: Removed session 40.
Jan 27 08:36:44 compute-0 ceph-mon[74357]: pgmap v378: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:44.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:44.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:36:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:36:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:36:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:36:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:36:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:36:46 compute-0 ceph-mon[74357]: pgmap v379: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:36:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:46.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:36:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:46.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:48 compute-0 ceph-mon[74357]: pgmap v380: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:48 compute-0 sshd-session[126309]: Accepted publickey for zuul from 192.168.122.30 port 56306 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:36:48 compute-0 systemd-logind[799]: New session 41 of user zuul.
Jan 27 08:36:48 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 27 08:36:48 compute-0 sshd-session[126309]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:36:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:48.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:48.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:49 compute-0 sudo[126463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znnvwhxzgbulopvptemprrlqlltsifna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503008.9554365-23-268770511481021/AnsiballZ_tempfile.py'
Jan 27 08:36:49 compute-0 sudo[126463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:49 compute-0 python3.9[126465]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 27 08:36:49 compute-0 sudo[126463]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:50 compute-0 sudo[126615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wylhpmgzgffpelxthzguremelygsjyol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503009.8964665-59-169064987026341/AnsiballZ_stat.py'
Jan 27 08:36:50 compute-0 sudo[126615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:50 compute-0 ceph-mon[74357]: pgmap v381: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:50 compute-0 python3.9[126617]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:36:50 compute-0 sudo[126615]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:50.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:50.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:51 compute-0 sudo[126770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-votflkmvijpxirhythptnsirmilkgqyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503010.9195464-83-212836729262165/AnsiballZ_slurp.py'
Jan 27 08:36:51 compute-0 sudo[126770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:51 compute-0 python3.9[126772]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 27 08:36:51 compute-0 sudo[126770]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:52 compute-0 sudo[126922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmucrunyjnvgjxncdnbkbafagqftutrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503011.8248293-107-145778112051026/AnsiballZ_stat.py'
Jan 27 08:36:52 compute-0 sudo[126922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:52 compute-0 python3.9[126924]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.l291w09v follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:36:52 compute-0 sudo[126922]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:52 compute-0 ceph-mon[74357]: pgmap v382: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:52 compute-0 sudo[127047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqaxbmqbrefwehyftyxfkjrvjaieyzwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503011.8248293-107-145778112051026/AnsiballZ_copy.py'
Jan 27 08:36:52 compute-0 sudo[127047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:52.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:52.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:53 compute-0 python3.9[127049]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.l291w09v mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503011.8248293-107-145778112051026/.source.l291w09v _original_basename=.7_p4xt6s follow=False checksum=b65ac71155ffcea28b097ac173c7ae9570fb76ea backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:53 compute-0 sudo[127047]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:53 compute-0 sudo[127200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmdjenrfnbtfeuqvvounxuzawiqcbonm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503013.1858912-152-72704391981876/AnsiballZ_setup.py'
Jan 27 08:36:53 compute-0 sudo[127200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:54 compute-0 python3.9[127202]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:36:54 compute-0 sudo[127200]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:54 compute-0 ceph-mon[74357]: pgmap v383: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:54 compute-0 sudo[127352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipyiccuaqauhhsmagojoruhestcyxkrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503014.342355-177-241169900090208/AnsiballZ_blockinfile.py'
Jan 27 08:36:54 compute-0 sudo[127352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:54 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 08:36:54 compute-0 python3.9[127354]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZQYFLJNCSqPubvkgV+mSAWsKyHEn5zEd8PjcBARYbKex1zE5KP5kYHD5RqkMSEGbaLqB01216SE5OLsJdp7zDtEvYvOiuzSdilxqK8FneqyHJuL3BVd0SK0Ou88elrYxCMog6DNui3gSOw4hb71J9rM8CeUo3ou61yWHQq1IuGW/eZdsN2zZRhtvYy6TmeozTA9iybgebjYHIk98nQOhocTi1H5QmICMFzzGX0A74QafSrIBBed8sHQ3ElScEdK/RfmmsHGKwkVkuEP34cvD+Agd8VSaQ5cSYjtTBzgNWSxd3MmLtX7xbx02sW6AixTXdc0Rg6z0wnrM5Rw2ACynusV8xc5JPUwMcPxzOVKVPuO4PahYvMmYIq/5Cn6rSakk4KiSkeHr5QU7XTn/b6Vg1UtxU4m2FUpqnuF8kn6VN4evt7snG9oN8IUBsFoTvviMHNT0oSz3yBCp3CQ72GzJJTt2p6B3fAJRuil9lWxe/Q6nzOAkcSes+tI54/Yx3AJ0=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIU4q8/exihWO+LCEgVZGFOu7nizMQ7PRBYf9UmhVfWu
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHy9oQDK7bN+bEbzpMT6Qkq1ZjkuLkqy7zdXiLz4z1/0zlHVkEt5G4ADDr6nb9SxllvpTitSX4S/ovd8Jbtwv6w=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeZVzJUmBr8rscwr/tyZkmyMtp8DsjVmT1Go2ik8PcNvKzMeefyxTZ3Uxpz1L0QWjiy3C+1IbTOPHtmNePnEYO6JumkmaqkVrXkXV0/NVfC4CBUpsyDRaGJ+STPhy6KJ71JuESt2ey61/P5BgKdolhn/ypkCXuLPOeGUOU/zr8Z6r0kUHQTMxbo22ElMOZ7E9WcU+1dhg1QOmJNjeRPf5zA6aWl70dc4DQz/MCGoEugK3/BHxi4LTHbTmqaAyxWPk5eIUGJ9ZhO368KneUkjHMCQJPuxrk5Nfi78IKmoXabmrRz+toYCoJZwHgN312halgSgYBiNObCY2wFuIa/1yyzH7fZ8t9izWftTnv6e6rIq1pwdRfGDEFMS3qfGJFbpVZd0vZRLxZcCSRj6nSGTsgYxrnKxiXN9DlnYL4mL8IobHYLgYvjEJRxSFNcvJeEec2dHf39g94gYUp5A6g0JSUQua9uGuUw3u2hjuw/hqWJ1+Qtqz+v6aAr5vDKoXnDnk=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINsL+jpg9IqB3QHcoTIKXMaJ36zCdaJtKTD57FBkukfF
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKI5+7ocma2vDS2iiMtTo2VOfmNxAY3b9rJYJIYe1s2vpy4//aKnloQB1/36D/Ob9gEKU2cs2feFXzaHWoMb8Fw=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXsXvncMJ0UzA2kZWT6PmXqnKs4jKM0Sr01zB/XUpOk9hr3myA119m0OywalXpo8EKjtKewhszXHOe+836O9Oaro5nUthxWGueffDrPmvv3U+olo/D4WZHmtWqYMeuY9WZQYg3SkzROARzA5D1LBzcnj89JWIK3wozoImndBu1dy4wvoUl5pvJJb/8wpn6MW1qztsckSYeFyxKIjKUInt63Co2RDrpcNLx0ym4RH3nR/eak0lQJzFqg7dNSRKSnyq2KkAoqgXxqlBeMV3zXbvoM4T9/RDQNHhBTvj4Sz1gx0h6tQwZD5xvHsTUTpb8IY/WjYRb5bwfCqaY6GkxPXGgUZtOiQpgqVgIm/A0s6yMkCX+vgg7T5bDe42bXQ3T0yzYCXqXqKr7283USNHtAxvyS7HJ4+1jQooCUK7zLgzrxvzsa1Jbm3fD/DPpub8RUl4IHHsAn4snBFk0i5918tARMsoCVGeSSsIUpm0Lb6oP25Svt9veUbsUUIyBnZ9C9bk=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHBAV6NbK6BWJ7Z6z/q1/WahjUGnZCfaJADbVIPDztAu
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKoQeb8dUBWroibOpcXZLZW2jU7oc/D85IJfotnbJ13c+NsTa9bvtSQuFOZBSiJxFZBz8g5tHP1dX2zBDygyl0w=
                                              create=True mode=0644 path=/tmp/ansible.l291w09v state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:54.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:54 compute-0 sudo[127352]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:54.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:55 compute-0 sudo[127382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:55 compute-0 sudo[127382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:55 compute-0 sudo[127382]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:55 compute-0 sudo[127415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:36:55 compute-0 sudo[127415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:36:55 compute-0 sudo[127415]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:55 compute-0 sudo[127557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dikamujqzdupaaojrskfoxjkgbdehdjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503015.378024-201-228991068034757/AnsiballZ_command.py'
Jan 27 08:36:55 compute-0 sudo[127557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:56 compute-0 python3.9[127559]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.l291w09v' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:36:56 compute-0 sudo[127557]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:56 compute-0 sudo[127711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpctyxmgkhxppgsmtlgqtpbfknadzdks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503016.3084617-225-74764169872729/AnsiballZ_file.py'
Jan 27 08:36:56 compute-0 sudo[127711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:36:56 compute-0 ceph-mon[74357]: pgmap v384: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:56.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:36:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:56.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:56 compute-0 python3.9[127713]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.l291w09v state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:36:56 compute-0 sudo[127711]: pam_unix(sudo:session): session closed for user root
Jan 27 08:36:57 compute-0 sshd-session[126312]: Connection closed by 192.168.122.30 port 56306
Jan 27 08:36:57 compute-0 sshd-session[126309]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:36:57 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 27 08:36:57 compute-0 systemd[1]: session-41.scope: Consumed 4.707s CPU time.
Jan 27 08:36:57 compute-0 systemd-logind[799]: Session 41 logged out. Waiting for processes to exit.
Jan 27 08:36:57 compute-0 systemd-logind[799]: Removed session 41.
Jan 27 08:36:57 compute-0 sshd-session[71341]: Received disconnect from 38.102.83.162 port 43894:11: disconnected by user
Jan 27 08:36:57 compute-0 sshd-session[71341]: Disconnected from user zuul 38.102.83.162 port 43894
Jan 27 08:36:57 compute-0 sshd-session[71338]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:36:57 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 27 08:36:57 compute-0 systemd[1]: session-18.scope: Consumed 1min 15.040s CPU time.
Jan 27 08:36:57 compute-0 systemd-logind[799]: Session 18 logged out. Waiting for processes to exit.
Jan 27 08:36:57 compute-0 systemd-logind[799]: Removed session 18.
Jan 27 08:36:58 compute-0 ceph-mon[74357]: pgmap v385: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:36:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:36:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:36:58.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:36:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:36:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:36:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:36:58.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:00 compute-0 ceph-mon[74357]: pgmap v386: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:00.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:00.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:37:02 compute-0 ceph-mon[74357]: pgmap v387: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:02.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:02.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:03 compute-0 sshd-session[127742]: Accepted publickey for zuul from 192.168.122.30 port 38422 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:37:03 compute-0 systemd-logind[799]: New session 42 of user zuul.
Jan 27 08:37:03 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 27 08:37:03 compute-0 sshd-session[127742]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:37:04 compute-0 python3.9[127895]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:37:04 compute-0 ceph-mon[74357]: pgmap v388: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:04.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:04.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:05 compute-0 sudo[128050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tstugrdlghhwzwldizjoqmveimyzbaci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503024.9505675-56-193050927758033/AnsiballZ_systemd.py'
Jan 27 08:37:05 compute-0 sudo[128050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:05 compute-0 python3.9[128052]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 27 08:37:05 compute-0 sudo[128050]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:06 compute-0 sudo[128204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxfwsvttpcamyvbbddeoltyboxvxnifs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503026.1877701-80-101386812504655/AnsiballZ_systemd.py'
Jan 27 08:37:06 compute-0 sudo[128204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:06 compute-0 ceph-mon[74357]: pgmap v389: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:06 compute-0 python3.9[128206]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:37:06 compute-0 sudo[128204]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:37:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:06.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:06.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:07 compute-0 sudo[128358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlygcmxzwcswvtublpvqgbbjdehcaovn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503027.1772604-107-272639564987138/AnsiballZ_command.py'
Jan 27 08:37:07 compute-0 sudo[128358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:07 compute-0 python3.9[128360]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:37:07 compute-0 sudo[128358]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:08 compute-0 sudo[128511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nomcacsrgsdmdichyakurmsazrczpvgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503028.2206972-131-139909038175430/AnsiballZ_stat.py'
Jan 27 08:37:08 compute-0 sudo[128511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:08 compute-0 ceph-mon[74357]: pgmap v390: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:08 compute-0 python3.9[128513]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:37:08 compute-0 sudo[128511]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:08.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:08.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:09 compute-0 sudo[128664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfzqerebjpijqjfpefjvxwejylnnmyrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503029.193613-158-121667019755416/AnsiballZ_file.py'
Jan 27 08:37:09 compute-0 sudo[128664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:09 compute-0 python3.9[128666]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:09 compute-0 sudo[128664]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:10 compute-0 sshd-session[127745]: Connection closed by 192.168.122.30 port 38422
Jan 27 08:37:10 compute-0 sshd-session[127742]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:37:10 compute-0 systemd-logind[799]: Session 42 logged out. Waiting for processes to exit.
Jan 27 08:37:10 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 27 08:37:10 compute-0 systemd[1]: session-42.scope: Consumed 3.629s CPU time.
Jan 27 08:37:10 compute-0 systemd-logind[799]: Removed session 42.
Jan 27 08:37:10 compute-0 ceph-mon[74357]: pgmap v391: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:10.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:10.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:37:12 compute-0 ceph-mon[74357]: pgmap v392: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:12.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:12.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:14 compute-0 ceph-mon[74357]: pgmap v393: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:14.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:14.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:37:14
Jan 27 08:37:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:37:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:37:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.mgr', 'backups', 'volumes']
Jan 27 08:37:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:37:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:37:15 compute-0 sshd-session[128694]: Accepted publickey for zuul from 192.168.122.30 port 39190 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:37:15 compute-0 systemd-logind[799]: New session 43 of user zuul.
Jan 27 08:37:15 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 27 08:37:15 compute-0 sshd-session[128694]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:37:15 compute-0 sudo[128750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:15 compute-0 sudo[128750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:15 compute-0 sudo[128750]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:15 compute-0 sudo[128775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:15 compute-0 sudo[128775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:15 compute-0 sudo[128775]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:16 compute-0 ceph-mon[74357]: pgmap v394: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:16 compute-0 python3.9[128897]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:37:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:37:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:16.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:16.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:16.956787) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503036956828, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 760, "num_deletes": 250, "total_data_size": 1110202, "memory_usage": 1132968, "flush_reason": "Manual Compaction"}
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503036962493, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 716692, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9978, "largest_seqno": 10737, "table_properties": {"data_size": 713398, "index_size": 1138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8340, "raw_average_key_size": 19, "raw_value_size": 706520, "raw_average_value_size": 1678, "num_data_blocks": 50, "num_entries": 421, "num_filter_entries": 421, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502974, "oldest_key_time": 1769502974, "file_creation_time": 1769503036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5750 microseconds, and 3401 cpu microseconds.
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:16.962542) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 716692 bytes OK
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:16.962559) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:16.963814) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:16.963829) EVENT_LOG_v1 {"time_micros": 1769503036963824, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:16.963845) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1106456, prev total WAL file size 1106456, number of live WAL files 2.
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:16.964430) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(699KB)], [23(9503KB)]
Jan 27 08:37:16 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503036964461, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10448362, "oldest_snapshot_seqno": -1}
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3773 keys, 7750735 bytes, temperature: kUnknown
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503037002768, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7750735, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7722686, "index_size": 17507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 91583, "raw_average_key_size": 24, "raw_value_size": 7651679, "raw_average_value_size": 2028, "num_data_blocks": 765, "num_entries": 3773, "num_filter_entries": 3773, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769503036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:17.003482) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7750735 bytes
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:17.004925) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 269.6 rd, 200.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.3 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(25.4) write-amplify(10.8) OK, records in: 4263, records dropped: 490 output_compression: NoCompression
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:17.004969) EVENT_LOG_v1 {"time_micros": 1769503037004950, "job": 8, "event": "compaction_finished", "compaction_time_micros": 38758, "compaction_time_cpu_micros": 17987, "output_level": 6, "num_output_files": 1, "total_output_size": 7750735, "num_input_records": 4263, "num_output_records": 3773, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503037006237, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503037011306, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:16.964389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:17.011614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:17.011622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:17.011624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:17.011625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:37:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:37:17.011627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:37:17 compute-0 sudo[129052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kidpnzkoopodbxxvvamvmhdziuywjeod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503036.7849889-62-245030905300716/AnsiballZ_setup.py'
Jan 27 08:37:17 compute-0 sudo[129052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:17 compute-0 python3.9[129054]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:37:17 compute-0 sudo[129052]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:18 compute-0 sudo[129136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukoieslhbkjvylxqfpvuopvnbbwlxros ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503036.7849889-62-245030905300716/AnsiballZ_dnf.py'
Jan 27 08:37:18 compute-0 sudo[129136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:18 compute-0 ceph-mon[74357]: pgmap v395: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:18 compute-0 python3.9[129138]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 08:37:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:18.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:18.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:19 compute-0 sudo[129136]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:20 compute-0 ceph-mon[74357]: pgmap v396: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:20 compute-0 python3.9[129290]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:37:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:20.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:20.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:21 compute-0 python3.9[129442]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 08:37:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:37:22 compute-0 ceph-mon[74357]: pgmap v397: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:22 compute-0 python3.9[129592]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:37:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:22.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:22.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:23 compute-0 python3.9[129743]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:37:23 compute-0 sshd-session[128697]: Connection closed by 192.168.122.30 port 39190
Jan 27 08:37:23 compute-0 sshd-session[128694]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:37:23 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 27 08:37:23 compute-0 systemd[1]: session-43.scope: Consumed 5.645s CPU time.
Jan 27 08:37:23 compute-0 systemd-logind[799]: Session 43 logged out. Waiting for processes to exit.
Jan 27 08:37:23 compute-0 systemd-logind[799]: Removed session 43.
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
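[annotation] The pg_autoscaler lines above all fit raw_target = usage_ratio * bias * 300, and a factor of 300 is consistent with mon_target_pg_per_osd=100 x 3 OSDs / replica size 1 (an inference from the 21 GiB cluster built from three ~7 GiB OSDs; the log does not state it). A quick reproduction of that arithmetic under those assumptions:

    # Reproduce the pg_autoscaler targets from the log lines above.
    pools = {
        '.mgr':               (2.0538165363856318e-05, 1.0),
        'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),
        'default.rgw.log':    (6.17962497673553e-06,   1.0),
    }
    factor = 100 * 3 / 1   # assumed: target_pg_per_osd * osds / replica size
    for name, (usage, bias) in pools.items():
        raw = usage * bias * factor
        print(f'{name}: raw pg target {raw:.12g}')
    # Matches the logged targets (0.00616..., 0.00174..., 0.00185...); each raw
    # target is far below the pool's current pg_num, so the quantized value
    # stays at the current power of two (1, 16, 32).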
Jan 27 08:37:24 compute-0 ceph-mon[74357]: pgmap v398: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:24.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:24.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:37:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2381 writes, 10K keys, 2381 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2381 writes, 2381 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2381 writes, 10K keys, 2381 commit groups, 1.0 writes per commit group, ingest: 13.94 MB, 0.02 MB/s
                                           Interval WAL: 2381 writes, 2381 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    105.0      0.11              0.03         4    0.028       0      0       0.0       0.0
                                             L6      1/0    7.39 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    104.8     89.3      0.27              0.06         3    0.091     12K   1303       0.0       0.0
                                            Sum      1/0    7.39 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     74.5     93.8      0.38              0.09         7    0.055     12K   1303       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     75.6     95.0      0.38              0.09         6    0.063     12K   1303       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    104.8     89.3      0.27              0.06         3    0.091     12K   1303       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    110.0      0.11              0.03         3    0.035       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.4      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.011, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.4 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f59eb431f0#2 capacity: 308.00 MB usage: 1.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(55,909.28 KB,0.288302%) FilterBlock(8,41.73 KB,0.0132325%) IndexBlock(8,91.95 KB,0.0291552%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
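Jan 27 08:37:26 [annotation] The block above is RocksDB's periodic stats dump from the monitor's store; its throughput figures follow directly from ingest divided by uptime. A quick check of that arithmetic using the interval numbers from the dump:

    # Interval stats from the dump above: 2381 WAL writes, 13.94 MB ingested
    # over a 600 s window.
    ingest_mb, uptime_s, writes = 13.94, 600.0, 2381
    print(f'{ingest_mb / uptime_s:.2f} MB/s')    # 0.02 MB/s, as logged
    print(f'{writes / uptime_s:.1f} writes/s')   # ~4 commit groups per second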
Jan 27 08:37:26 compute-0 ceph-mon[74357]: pgmap v399: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:37:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:26.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:26.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:28 compute-0 ceph-mon[74357]: pgmap v400: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:28.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:29 compute-0 sshd-session[129771]: Accepted publickey for zuul from 192.168.122.30 port 57524 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:37:29 compute-0 systemd-logind[799]: New session 44 of user zuul.
Jan 27 08:37:29 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 27 08:37:29 compute-0 sshd-session[129771]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:37:30 compute-0 sudo[129925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:30 compute-0 sudo[129925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:30 compute-0 sudo[129925]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:30 compute-0 python3.9[129924]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:37:30 compute-0 sudo[129950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:37:30 compute-0 ceph-mon[74357]: pgmap v401: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:30 compute-0 sudo[129950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:30 compute-0 sudo[129950]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:30 compute-0 sudo[129979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:30 compute-0 sudo[129979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:30 compute-0 sudo[129979]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:30 compute-0 sudo[130004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:37:30 compute-0 sudo[130004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:30 compute-0 sudo[130004]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:37:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:37:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:37:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:37:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:37:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:37:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:30 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 6f4f7bdb-eee5-4d31-b407-49136034687e does not exist
Jan 27 08:37:30 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 7c9e6182-9340-4774-8220-de5008168645 does not exist
Jan 27 08:37:30 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 970cb0e3-2466-4a3d-b2b7-549867347727 does not exist
Jan 27 08:37:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:37:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:37:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:37:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:37:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:37:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
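[annotation] The audit lines above show the mgr's cephadm module driving the monitor with mon_commands (config generate-minimal-conf, auth get, osd tree for destroyed OSDs) while preparing to create an OSD. The same commands can be issued from the ceph CLI for inspection; a sketch, assuming an admin keyring is available on the host:

    import subprocess

    # CLI equivalents of the mon_commands dispatched by the mgr above.
    for cmd in (
        ['ceph', 'config', 'generate-minimal-conf'],
        ['ceph', 'auth', 'get', 'client.bootstrap-osd'],
        ['ceph', 'osd', 'tree', 'destroyed', '--format', 'json'],
    ):
        out = subprocess.run(cmd, capture_output=True, text=True)
        print('$', ' '.join(cmd), '-> rc', out.returncode)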
Jan 27 08:37:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:30.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:30.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:31 compute-0 sudo[130084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:31 compute-0 sudo[130084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:31 compute-0 sudo[130084]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:31 compute-0 sudo[130109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:37:31 compute-0 sudo[130109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:31 compute-0 sudo[130109]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:31 compute-0 sudo[130135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:31 compute-0 sudo[130135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:31 compute-0 sudo[130135]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:31 compute-0 sudo[130160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:37:31 compute-0 sudo[130160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:37:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:37:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:37:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:37:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:37:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:37:31 compute-0 podman[130274]: 2026-01-27 08:37:31.502930264 +0000 UTC m=+0.041480027 container create 6a2e24558dc5a1b57c2f02ae5d37e92b722f30f4680c10279001353994b3b01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_archimedes, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Jan 27 08:37:31 compute-0 systemd[1]: Started libpod-conmon-6a2e24558dc5a1b57c2f02ae5d37e92b722f30f4680c10279001353994b3b01e.scope.
Jan 27 08:37:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:37:31 compute-0 podman[130274]: 2026-01-27 08:37:31.48470809 +0000 UTC m=+0.023257883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:37:31 compute-0 podman[130274]: 2026-01-27 08:37:31.588015544 +0000 UTC m=+0.126565337 container init 6a2e24558dc5a1b57c2f02ae5d37e92b722f30f4680c10279001353994b3b01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_archimedes, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:37:31 compute-0 podman[130274]: 2026-01-27 08:37:31.598696063 +0000 UTC m=+0.137245826 container start 6a2e24558dc5a1b57c2f02ae5d37e92b722f30f4680c10279001353994b3b01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_archimedes, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:37:31 compute-0 podman[130274]: 2026-01-27 08:37:31.601403507 +0000 UTC m=+0.139953330 container attach 6a2e24558dc5a1b57c2f02ae5d37e92b722f30f4680c10279001353994b3b01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 27 08:37:31 compute-0 friendly_archimedes[130290]: 167 167
Jan 27 08:37:31 compute-0 systemd[1]: libpod-6a2e24558dc5a1b57c2f02ae5d37e92b722f30f4680c10279001353994b3b01e.scope: Deactivated successfully.
Jan 27 08:37:31 compute-0 conmon[130290]: conmon 6a2e24558dc5a1b57c2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a2e24558dc5a1b57c2f02ae5d37e92b722f30f4680c10279001353994b3b01e.scope/container/memory.events
Jan 27 08:37:31 compute-0 podman[130274]: 2026-01-27 08:37:31.609710962 +0000 UTC m=+0.148260715 container died 6a2e24558dc5a1b57c2f02ae5d37e92b722f30f4680c10279001353994b3b01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_archimedes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:37:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cd89a0e933f3e0995f1f1779e2edd4516b7130b27040821b2c1324ad811fdfb-merged.mount: Deactivated successfully.
Jan 27 08:37:31 compute-0 podman[130274]: 2026-01-27 08:37:31.650574002 +0000 UTC m=+0.189123755 container remove 6a2e24558dc5a1b57c2f02ae5d37e92b722f30f4680c10279001353994b3b01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_archimedes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 27 08:37:31 compute-0 systemd[1]: libpod-conmon-6a2e24558dc5a1b57c2f02ae5d37e92b722f30f4680c10279001353994b3b01e.scope: Deactivated successfully.
Jan 27 08:37:31 compute-0 podman[130362]: 2026-01-27 08:37:31.800223283 +0000 UTC m=+0.050263725 container create 719f86da2abe776d989bf0886fe0275255b3d9e138c94092a457c479cae5d5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:37:31 compute-0 sudo[130402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntmzlsmiuheoqwjpnmeimjizvjyqvpty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503051.3696446-110-197569890902721/AnsiballZ_file.py'
Jan 27 08:37:31 compute-0 sudo[130402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:31 compute-0 systemd[1]: Started libpod-conmon-719f86da2abe776d989bf0886fe0275255b3d9e138c94092a457c479cae5d5d1.scope.
Jan 27 08:37:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:37:31 compute-0 podman[130362]: 2026-01-27 08:37:31.780417976 +0000 UTC m=+0.030458458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:37:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54541dee85125287f557b734b03d63fda628f7f71e98fbf334f4b529324d62da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54541dee85125287f557b734b03d63fda628f7f71e98fbf334f4b529324d62da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54541dee85125287f557b734b03d63fda628f7f71e98fbf334f4b529324d62da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54541dee85125287f557b734b03d63fda628f7f71e98fbf334f4b529324d62da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54541dee85125287f557b734b03d63fda628f7f71e98fbf334f4b529324d62da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:31 compute-0 podman[130362]: 2026-01-27 08:37:31.894103332 +0000 UTC m=+0.144143754 container init 719f86da2abe776d989bf0886fe0275255b3d9e138c94092a457c479cae5d5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kirch, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:37:31 compute-0 podman[130362]: 2026-01-27 08:37:31.901867133 +0000 UTC m=+0.151907555 container start 719f86da2abe776d989bf0886fe0275255b3d9e138c94092a457c479cae5d5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kirch, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:37:31 compute-0 podman[130362]: 2026-01-27 08:37:31.904616538 +0000 UTC m=+0.154656960 container attach 719f86da2abe776d989bf0886fe0275255b3d9e138c94092a457c479cae5d5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 27 08:37:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:37:32 compute-0 python3.9[130404]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:37:32 compute-0 sudo[130402]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:32 compute-0 ceph-mon[74357]: pgmap v402: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:32 compute-0 sudo[130561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymkwokutywypmocxyxhmtrpzppwapxge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503052.1874192-110-273425504948838/AnsiballZ_file.py'
Jan 27 08:37:32 compute-0 sudo[130561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:32 compute-0 clever_kirch[130407]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:37:32 compute-0 clever_kirch[130407]: --> relative data size: 1.0
Jan 27 08:37:32 compute-0 clever_kirch[130407]: --> All data devices are unavailable
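[annotation] The clever_kirch container is the `ceph-volume ... lvm batch` run launched by the sudo command at 08:37:31; it reports the single passed LV as unavailable because /dev/ceph_vg0/ceph_lv0 already carries ceph.* LV tags from an earlier prepare (visible in the lvm list output further down), so batch has nothing new to create. One way to check for that condition outside ceph-volume is to read the LV tags with lvs; a sketch assuming LVM2's JSON report output:

    import json, subprocess

    # An LV already consumed by an OSD carries ceph.osd_id / ceph.osd_fsid tags.
    report = json.loads(subprocess.run(
        ['lvs', '--reportformat', 'json', '-o', 'lv_name,vg_name,lv_tags'],
        capture_output=True, text=True, check=True).stdout)

    for lv in report['report'][0]['lv']:
        if 'ceph.osd_id=' in lv['lv_tags']:
            print(f"{lv['vg_name']}/{lv['lv_name']} is already an OSD:",
                  lv['lv_tags'])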
Jan 27 08:37:32 compute-0 python3.9[130563]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:37:32 compute-0 systemd[1]: libpod-719f86da2abe776d989bf0886fe0275255b3d9e138c94092a457c479cae5d5d1.scope: Deactivated successfully.
Jan 27 08:37:32 compute-0 podman[130362]: 2026-01-27 08:37:32.706245267 +0000 UTC m=+0.956285739 container died 719f86da2abe776d989bf0886fe0275255b3d9e138c94092a457c479cae5d5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kirch, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:37:32 compute-0 sudo[130561]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-54541dee85125287f557b734b03d63fda628f7f71e98fbf334f4b529324d62da-merged.mount: Deactivated successfully.
Jan 27 08:37:32 compute-0 podman[130362]: 2026-01-27 08:37:32.756800009 +0000 UTC m=+1.006840421 container remove 719f86da2abe776d989bf0886fe0275255b3d9e138c94092a457c479cae5d5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:37:32 compute-0 systemd[1]: libpod-conmon-719f86da2abe776d989bf0886fe0275255b3d9e138c94092a457c479cae5d5d1.scope: Deactivated successfully.
Jan 27 08:37:32 compute-0 sudo[130160]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:32 compute-0 sudo[130609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:32 compute-0 sudo[130609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:32 compute-0 sudo[130609]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:32 compute-0 sudo[130643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:37:32 compute-0 sudo[130643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:32 compute-0 sudo[130643]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:32.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:32.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:32 compute-0 sudo[130695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:32 compute-0 sudo[130695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:32 compute-0 sudo[130695]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:33 compute-0 sudo[130736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:37:33 compute-0 sudo[130736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:33 compute-0 sudo[130890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqmpaxckwabfblvyywkinjzwdxmutorv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503052.8896608-155-264417339954317/AnsiballZ_stat.py'
Jan 27 08:37:33 compute-0 sudo[130890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:33 compute-0 podman[130850]: 2026-01-27 08:37:33.312664116 +0000 UTC m=+0.020582379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:37:33 compute-0 python3.9[130892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:33 compute-0 sudo[130890]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:33 compute-0 podman[130850]: 2026-01-27 08:37:33.542672199 +0000 UTC m=+0.250590452 container create a2ae8bc9595ca4ac21fd9ac998c093d22ae413ed61756905ee32e271dfb9c582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:37:33 compute-0 systemd[1]: Started libpod-conmon-a2ae8bc9595ca4ac21fd9ac998c093d22ae413ed61756905ee32e271dfb9c582.scope.
Jan 27 08:37:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:37:33 compute-0 podman[130850]: 2026-01-27 08:37:33.76740284 +0000 UTC m=+0.475321083 container init a2ae8bc9595ca4ac21fd9ac998c093d22ae413ed61756905ee32e271dfb9c582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:37:33 compute-0 podman[130850]: 2026-01-27 08:37:33.774609625 +0000 UTC m=+0.482527868 container start a2ae8bc9595ca4ac21fd9ac998c093d22ae413ed61756905ee32e271dfb9c582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:37:33 compute-0 podman[130850]: 2026-01-27 08:37:33.778212553 +0000 UTC m=+0.486130816 container attach a2ae8bc9595ca4ac21fd9ac998c093d22ae413ed61756905ee32e271dfb9c582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:37:33 compute-0 recursing_ishizaka[130942]: 167 167
Jan 27 08:37:33 compute-0 systemd[1]: libpod-a2ae8bc9595ca4ac21fd9ac998c093d22ae413ed61756905ee32e271dfb9c582.scope: Deactivated successfully.
Jan 27 08:37:33 compute-0 podman[130850]: 2026-01-27 08:37:33.780986168 +0000 UTC m=+0.488904411 container died a2ae8bc9595ca4ac21fd9ac998c093d22ae413ed61756905ee32e271dfb9c582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5da1ed9eb514f11386726db0fa6adcb499fe6d5f6978aed572a31598172fa5dd-merged.mount: Deactivated successfully.
Jan 27 08:37:33 compute-0 podman[130850]: 2026-01-27 08:37:33.823310638 +0000 UTC m=+0.531228881 container remove a2ae8bc9595ca4ac21fd9ac998c093d22ae413ed61756905ee32e271dfb9c582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 27 08:37:33 compute-0 systemd[1]: libpod-conmon-a2ae8bc9595ca4ac21fd9ac998c093d22ae413ed61756905ee32e271dfb9c582.scope: Deactivated successfully.
Jan 27 08:37:33 compute-0 podman[131000]: 2026-01-27 08:37:33.978179901 +0000 UTC m=+0.050835561 container create d9b6bc93b3ac9296c40072f297261ad1c95f1b6887194932a9b0498d00e57e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shamir, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:37:34 compute-0 systemd[1]: Started libpod-conmon-d9b6bc93b3ac9296c40072f297261ad1c95f1b6887194932a9b0498d00e57e45.scope.
Jan 27 08:37:34 compute-0 sudo[131054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqtybpggdwakxoiltpwukfpaqsmodsbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503052.8896608-155-264417339954317/AnsiballZ_copy.py'
Jan 27 08:37:34 compute-0 sudo[131054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe85402f6f629218416f693249d4e1e124a73e750663e2ed89162470ac14f870/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe85402f6f629218416f693249d4e1e124a73e750663e2ed89162470ac14f870/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe85402f6f629218416f693249d4e1e124a73e750663e2ed89162470ac14f870/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe85402f6f629218416f693249d4e1e124a73e750663e2ed89162470ac14f870/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:34 compute-0 podman[131000]: 2026-01-27 08:37:33.959619387 +0000 UTC m=+0.032275057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:37:34 compute-0 podman[131000]: 2026-01-27 08:37:34.059410246 +0000 UTC m=+0.132065916 container init d9b6bc93b3ac9296c40072f297261ad1c95f1b6887194932a9b0498d00e57e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:37:34 compute-0 podman[131000]: 2026-01-27 08:37:34.068693538 +0000 UTC m=+0.141349208 container start d9b6bc93b3ac9296c40072f297261ad1c95f1b6887194932a9b0498d00e57e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shamir, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:37:34 compute-0 podman[131000]: 2026-01-27 08:37:34.071585496 +0000 UTC m=+0.144241156 container attach d9b6bc93b3ac9296c40072f297261ad1c95f1b6887194932a9b0498d00e57e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shamir, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:37:34 compute-0 python3.9[131058]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503052.8896608-155-264417339954317/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=35aa45839db6b3007c7b28dd758e5ba6688ffd40 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:34 compute-0 sudo[131054]: pam_unix(sudo:session): session closed for user root
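[annotation] The copy task above records checksum=35aa45839db6b3007c7b28dd758e5ba6688ffd40 for the installed tls.crt, and the surrounding stat invocations show checksum_algorithm=sha1, i.e. the checksum is a SHA-1 of the file contents. A sketch that verifies the deployed file against the logged value:

    import hashlib

    expected = '35aa45839db6b3007c7b28dd758e5ba6688ffd40'  # from the log line
    path = '/var/lib/openstack/certs/libvirt/default/tls.crt'

    with open(path, 'rb') as f:
        actual = hashlib.sha1(f.read()).hexdigest()

    print('match' if actual == expected else f'mismatch: {actual}')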
Jan 27 08:37:34 compute-0 ceph-mon[74357]: pgmap v403: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:34 compute-0 sudo[131211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzgqumdbwtbdmixomemqmzavebzpviqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503054.3698468-155-73121982740235/AnsiballZ_stat.py'
Jan 27 08:37:34 compute-0 sudo[131211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:34 compute-0 python3.9[131213]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:34 compute-0 sudo[131211]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:34 compute-0 elastic_shamir[131056]: {
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:     "0": [
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:         {
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             "devices": [
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "/dev/loop3"
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             ],
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             "lv_name": "ceph_lv0",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             "lv_size": "7511998464",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             "name": "ceph_lv0",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             "tags": {
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "ceph.cluster_name": "ceph",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "ceph.crush_device_class": "",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "ceph.encrypted": "0",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "ceph.osd_id": "0",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "ceph.type": "block",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:                 "ceph.vdo": "0"
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             },
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             "type": "block",
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:             "vg_name": "ceph_vg0"
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:         }
Jan 27 08:37:34 compute-0 elastic_shamir[131056]:     ]
Jan 27 08:37:34 compute-0 elastic_shamir[131056]: }
Jan 27 08:37:34 compute-0 systemd[1]: libpod-d9b6bc93b3ac9296c40072f297261ad1c95f1b6887194932a9b0498d00e57e45.scope: Deactivated successfully.
Jan 27 08:37:34 compute-0 podman[131000]: 2026-01-27 08:37:34.816451464 +0000 UTC m=+0.889107124 container died d9b6bc93b3ac9296c40072f297261ad1c95f1b6887194932a9b0498d00e57e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe85402f6f629218416f693249d4e1e124a73e750663e2ed89162470ac14f870-merged.mount: Deactivated successfully.
Jan 27 08:37:34 compute-0 podman[131000]: 2026-01-27 08:37:34.867814698 +0000 UTC m=+0.940470348 container remove d9b6bc93b3ac9296c40072f297261ad1c95f1b6887194932a9b0498d00e57e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:37:34 compute-0 systemd[1]: libpod-conmon-d9b6bc93b3ac9296c40072f297261ad1c95f1b6887194932a9b0498d00e57e45.scope: Deactivated successfully.
Jan 27 08:37:34 compute-0 sudo[130736]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:34 compute-0 sudo[131271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:34.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:34 compute-0 sudo[131271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:34 compute-0 sudo[131271]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:34.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:35 compute-0 sudo[131326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:37:35 compute-0 sudo[131326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:35 compute-0 sudo[131326]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:35 compute-0 sudo[131366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:35 compute-0 sudo[131366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:35 compute-0 sudo[131366]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:35 compute-0 sudo[131439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umavgnyvgovlmekrqfmfhakqviqqamqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503054.3698468-155-73121982740235/AnsiballZ_copy.py'
Jan 27 08:37:35 compute-0 sudo[131439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:35 compute-0 sudo[131416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:37:35 compute-0 sudo[131416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:35 compute-0 python3.9[131452]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503054.3698468-155-73121982740235/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=591e2c2a412e0c6709814eb0688b6a202ee6a8da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:35 compute-0 sudo[131439]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:35 compute-0 podman[131515]: 2026-01-27 08:37:35.453055814 +0000 UTC m=+0.036028489 container create b8f0f23ef56246b5311b2bc39de864f9a5967df6521850f239b3af316e28fec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 27 08:37:35 compute-0 systemd[1]: Started libpod-conmon-b8f0f23ef56246b5311b2bc39de864f9a5967df6521850f239b3af316e28fec2.scope.
Jan 27 08:37:35 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:37:35 compute-0 podman[131515]: 2026-01-27 08:37:35.514469731 +0000 UTC m=+0.097442436 container init b8f0f23ef56246b5311b2bc39de864f9a5967df6521850f239b3af316e28fec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 27 08:37:35 compute-0 podman[131515]: 2026-01-27 08:37:35.52140472 +0000 UTC m=+0.104377395 container start b8f0f23ef56246b5311b2bc39de864f9a5967df6521850f239b3af316e28fec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 27 08:37:35 compute-0 podman[131515]: 2026-01-27 08:37:35.524213226 +0000 UTC m=+0.107185901 container attach b8f0f23ef56246b5311b2bc39de864f9a5967df6521850f239b3af316e28fec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:37:35 compute-0 funny_mirzakhani[131555]: 167 167
Jan 27 08:37:35 compute-0 systemd[1]: libpod-b8f0f23ef56246b5311b2bc39de864f9a5967df6521850f239b3af316e28fec2.scope: Deactivated successfully.
Jan 27 08:37:35 compute-0 podman[131515]: 2026-01-27 08:37:35.527139255 +0000 UTC m=+0.110111930 container died b8f0f23ef56246b5311b2bc39de864f9a5967df6521850f239b3af316e28fec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:37:35 compute-0 podman[131515]: 2026-01-27 08:37:35.436905686 +0000 UTC m=+0.019878381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:37:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e17f7fe6d354ba0f626f6641c548ef50ed53242d7011d27de687562b5f2cdb4c-merged.mount: Deactivated successfully.
Jan 27 08:37:35 compute-0 podman[131515]: 2026-01-27 08:37:35.560048648 +0000 UTC m=+0.143021323 container remove b8f0f23ef56246b5311b2bc39de864f9a5967df6521850f239b3af316e28fec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 27 08:37:35 compute-0 systemd[1]: libpod-conmon-b8f0f23ef56246b5311b2bc39de864f9a5967df6521850f239b3af316e28fec2.scope: Deactivated successfully.
Jan 27 08:37:35 compute-0 sudo[131600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:35 compute-0 sudo[131600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:35 compute-0 sudo[131600]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:35 compute-0 sudo[131650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:35 compute-0 sudo[131650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:35 compute-0 sudo[131650]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:35 compute-0 podman[131686]: 2026-01-27 08:37:35.700074329 +0000 UTC m=+0.037370985 container create 05b32c4b8c15d8017afd9953a25de6dc8bbb8d94ee46cb566efd9fb6b5c07b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 08:37:35 compute-0 systemd[1]: Started libpod-conmon-05b32c4b8c15d8017afd9953a25de6dc8bbb8d94ee46cb566efd9fb6b5c07b17.scope.
Jan 27 08:37:35 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:37:35 compute-0 sudo[131749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sepwltwfnwlvrmamixyywviyiilsybrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503055.4777033-155-28675475879247/AnsiballZ_stat.py'
Jan 27 08:37:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6892c6bfdad9aa24325b36d47ebc0b76dac500b6f92b5bab877ea221cc4c6bb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:35 compute-0 sudo[131749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6892c6bfdad9aa24325b36d47ebc0b76dac500b6f92b5bab877ea221cc4c6bb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6892c6bfdad9aa24325b36d47ebc0b76dac500b6f92b5bab877ea221cc4c6bb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6892c6bfdad9aa24325b36d47ebc0b76dac500b6f92b5bab877ea221cc4c6bb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:35 compute-0 podman[131686]: 2026-01-27 08:37:35.763734747 +0000 UTC m=+0.101031423 container init 05b32c4b8c15d8017afd9953a25de6dc8bbb8d94ee46cb566efd9fb6b5c07b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 08:37:35 compute-0 podman[131686]: 2026-01-27 08:37:35.773588905 +0000 UTC m=+0.110885561 container start 05b32c4b8c15d8017afd9953a25de6dc8bbb8d94ee46cb566efd9fb6b5c07b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:37:35 compute-0 podman[131686]: 2026-01-27 08:37:35.776457863 +0000 UTC m=+0.113754519 container attach 05b32c4b8c15d8017afd9953a25de6dc8bbb8d94ee46cb566efd9fb6b5c07b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:37:35 compute-0 podman[131686]: 2026-01-27 08:37:35.683548291 +0000 UTC m=+0.020844947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:37:35 compute-0 python3.9[131752]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:35 compute-0 sudo[131749]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:36 compute-0 sudo[131875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qccjwtqzwyhzfpochrvdqgrzvaslkbjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503055.4777033-155-28675475879247/AnsiballZ_copy.py'
Jan 27 08:37:36 compute-0 sudo[131875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:36 compute-0 python3.9[131877]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503055.4777033-155-28675475879247/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=01fc01132141008026d985668f324ced22e793eb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:36 compute-0 sudo[131875]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:36 compute-0 thirsty_mcclintock[131747]: {
Jan 27 08:37:36 compute-0 thirsty_mcclintock[131747]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:37:36 compute-0 thirsty_mcclintock[131747]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:37:36 compute-0 thirsty_mcclintock[131747]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:37:36 compute-0 thirsty_mcclintock[131747]:         "osd_id": 0,
Jan 27 08:37:36 compute-0 thirsty_mcclintock[131747]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:37:36 compute-0 thirsty_mcclintock[131747]:         "type": "bluestore"
Jan 27 08:37:36 compute-0 thirsty_mcclintock[131747]:     }
Jan 27 08:37:36 compute-0 thirsty_mcclintock[131747]: }
Jan 27 08:37:36 compute-0 systemd[1]: libpod-05b32c4b8c15d8017afd9953a25de6dc8bbb8d94ee46cb566efd9fb6b5c07b17.scope: Deactivated successfully.
Jan 27 08:37:36 compute-0 podman[131686]: 2026-01-27 08:37:36.593014726 +0000 UTC m=+0.930311382 container died 05b32c4b8c15d8017afd9953a25de6dc8bbb8d94ee46cb566efd9fb6b5c07b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 27 08:37:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6892c6bfdad9aa24325b36d47ebc0b76dac500b6f92b5bab877ea221cc4c6bb5-merged.mount: Deactivated successfully.
Jan 27 08:37:36 compute-0 podman[131686]: 2026-01-27 08:37:36.643756183 +0000 UTC m=+0.981052839 container remove 05b32c4b8c15d8017afd9953a25de6dc8bbb8d94ee46cb566efd9fb6b5c07b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:37:36 compute-0 systemd[1]: libpod-conmon-05b32c4b8c15d8017afd9953a25de6dc8bbb8d94ee46cb566efd9fb6b5c07b17.scope: Deactivated successfully.
Jan 27 08:37:36 compute-0 sudo[131416]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:37:36 compute-0 ceph-mon[74357]: pgmap v404: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:37:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:37:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:37:36 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev a1e53d6a-7c97-494f-bf1d-8006c13c9ba0 does not exist
Jan 27 08:37:36 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 738793ab-aa7f-4b09-8239-74985ab3a771 does not exist
Jan 27 08:37:36 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b43aaf45-6a2b-4224-98ea-812297fa86e3 does not exist
Jan 27 08:37:36 compute-0 sudo[132018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:36 compute-0 sudo[132018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:36 compute-0 sudo[132018]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:37:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:36 compute-0 sudo[132094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zadlhugbutbwhyuxrbbetmwdonsvryxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503056.6900022-278-44300674596656/AnsiballZ_file.py'
Jan 27 08:37:36 compute-0 sudo[132094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:36 compute-0 sudo[132072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:37:36 compute-0 sudo[132072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:36 compute-0 sudo[132072]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:36.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:37:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:37:37 compute-0 python3.9[132107]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:37:37 compute-0 sudo[132094]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:37 compute-0 sudo[132260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omuznrizwquyuvlszpjuibwjbgrgxpyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503057.2595706-278-240449659675013/AnsiballZ_file.py'
Jan 27 08:37:37 compute-0 sudo[132260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:37 compute-0 python3.9[132262]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:37:37 compute-0 sudo[132260]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:37:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:37:38 compute-0 sudo[132412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xglqnaffhgsqvphdiabegypgiyftdcju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503057.8895156-318-34643478698703/AnsiballZ_stat.py'
Jan 27 08:37:38 compute-0 sudo[132412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:38 compute-0 python3.9[132414]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:38 compute-0 sudo[132412]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:38 compute-0 sudo[132535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyamdrlwvrlrtlsbtwzcgqmdngaazlfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503057.8895156-318-34643478698703/AnsiballZ_copy.py'
Jan 27 08:37:38 compute-0 sudo[132535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:38.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:38.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:39 compute-0 python3.9[132537]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503057.8895156-318-34643478698703/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=2a5c3e00c0cd26ac80e96d84666835fdd9601b6f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:39 compute-0 sudo[132535]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:39 compute-0 ceph-mon[74357]: pgmap v405: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:39 compute-0 sudo[132688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnoujjtcqoblqwtneneadqwrqooazwas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503059.1747787-318-66497869758572/AnsiballZ_stat.py'
Jan 27 08:37:39 compute-0 sudo[132688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:39 compute-0 python3.9[132690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:39 compute-0 sudo[132688]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:40 compute-0 sudo[132811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htgkjuzkdrzlllxiuyeajmxsayffyhue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503059.1747787-318-66497869758572/AnsiballZ_copy.py'
Jan 27 08:37:40 compute-0 sudo[132811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:40 compute-0 python3.9[132813]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503059.1747787-318-66497869758572/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e7c8b3cb93399f7a3b488a872a7cf05e4625091e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:40 compute-0 sudo[132811]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:40 compute-0 sudo[132963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmwralppwxldxxanagvnmbysxeovnygi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503060.416791-318-59030155781432/AnsiballZ_stat.py'
Jan 27 08:37:40 compute-0 sudo[132963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:40 compute-0 python3.9[132965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:40 compute-0 sudo[132963]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:40.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:41 compute-0 sudo[133087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbquetxkmynlmlzrutxqumkkrqtwpwwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503060.416791-318-59030155781432/AnsiballZ_copy.py'
Jan 27 08:37:41 compute-0 sudo[133087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:41 compute-0 python3.9[133089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503060.416791-318-59030155781432/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=bd248bfd841ca2f3bcb34b5cdfaca22757b07487 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:41 compute-0 sudo[133087]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:37:42 compute-0 sudo[133239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xksypacacqbbruejagxqkyxsxsiuwnat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503061.799456-445-130088502531685/AnsiballZ_file.py'
Jan 27 08:37:42 compute-0 sudo[133239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:42 compute-0 python3.9[133241]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:37:42 compute-0 sudo[133239]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:42 compute-0 sudo[133391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwkbycnwhtfpdjaxfcszocpkrzqismuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503062.6037056-445-139041780152983/AnsiballZ_file.py'
Jan 27 08:37:42 compute-0 sudo[133391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:42.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:42.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:43 compute-0 python3.9[133393]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:37:43 compute-0 sudo[133391]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:43 compute-0 sudo[133544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bflsqggzzrhmlmffdlrghezfqmdbgxwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503063.331492-493-151453872950739/AnsiballZ_stat.py'
Jan 27 08:37:43 compute-0 sudo[133544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:43 compute-0 python3.9[133546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:43 compute-0 sudo[133544]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:44 compute-0 sudo[133667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruorieabffiosnjpfiepccdehdbcwckj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503063.331492-493-151453872950739/AnsiballZ_copy.py'
Jan 27 08:37:44 compute-0 sudo[133667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:44 compute-0 python3.9[133669]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503063.331492-493-151453872950739/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=2a5632cd96ce76d94a9d2707fe1eaf5df5e20e80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:44 compute-0 sudo[133667]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:44.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:45 compute-0 sudo[133819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkjpcnlixpmndtpnfdomdehckznzxekj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503064.731033-493-6912151449928/AnsiballZ_stat.py'
Jan 27 08:37:45 compute-0 sudo[133819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:37:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:37:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:37:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:37:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:37:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:37:45 compute-0 python3.9[133821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:45 compute-0 sudo[133819]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:45 compute-0 sudo[133943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivucgvykidsuuatbbqvkhoifyzkjehvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503064.731033-493-6912151449928/AnsiballZ_copy.py'
Jan 27 08:37:45 compute-0 sudo[133943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:45 compute-0 ceph-mon[74357]: pgmap v406: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:45 compute-0 python3.9[133945]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503064.731033-493-6912151449928/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e7c8b3cb93399f7a3b488a872a7cf05e4625091e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:45 compute-0 sudo[133943]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:46 compute-0 sudo[134095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-outdxoktyfgqtbibwplyqbstqnbgaznl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503065.9637902-493-275845748366003/AnsiballZ_stat.py'
Jan 27 08:37:46 compute-0 sudo[134095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:46 compute-0 python3.9[134097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:46 compute-0 sudo[134095]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:46 compute-0 sudo[134218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfgsoaomdddtfccpddtnhnvzlghochlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503065.9637902-493-275845748366003/AnsiballZ_copy.py'
Jan 27 08:37:46 compute-0 sudo[134218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:37:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:46.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:46 compute-0 python3.9[134220]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503065.9637902-493-275845748366003/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=01f3c5f97ddc946a7c9d2bd18269fc3e649dafff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:46.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:47 compute-0 sudo[134218]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:48 compute-0 sudo[134371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrcrjpmcmjztjrxvxfchnoupuepmkckn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503068.2450092-628-278010577100524/AnsiballZ_file.py'
Jan 27 08:37:48 compute-0 sudo[134371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:48 compute-0 python3.9[134373]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:37:48 compute-0 sudo[134371]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:37:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:48.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:37:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:48.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:49 compute-0 ceph-mon[74357]: pgmap v407: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:49 compute-0 ceph-mon[74357]: pgmap v408: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:49 compute-0 ceph-mon[74357]: pgmap v409: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:49 compute-0 sudo[134525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlxdulyyfvjvccundcsurppokgpxdkhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503068.9432056-650-210967346259914/AnsiballZ_stat.py'
Jan 27 08:37:49 compute-0 sudo[134525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:49 compute-0 python3.9[134527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:49 compute-0 sudo[134525]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:49 compute-0 sudo[134648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndetkqodqmbbzimmdsmtjigoilxezxtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503068.9432056-650-210967346259914/AnsiballZ_copy.py'
Jan 27 08:37:49 compute-0 sudo[134648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:49 compute-0 python3.9[134650]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503068.9432056-650-210967346259914/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7fb5f5782584574169f631b3aaaac1ffc15b0eb1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:50 compute-0 sudo[134648]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:50 compute-0 sudo[134800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykltoflqrfqvjqqzdziyvanhpnbvanyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503070.246277-697-98144585828385/AnsiballZ_file.py'
Jan 27 08:37:50 compute-0 sudo[134800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:50 compute-0 python3.9[134802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:37:50 compute-0 sudo[134800]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:50 compute-0 ceph-mon[74357]: pgmap v410: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:50 compute-0 ceph-mon[74357]: pgmap v411: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:50.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:51.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:51 compute-0 sudo[134953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvzkwxwnnkmmxghmdpyjwkttqmqppzbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503070.9976394-716-80297586042160/AnsiballZ_stat.py'
Jan 27 08:37:51 compute-0 sudo[134953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:51 compute-0 python3.9[134955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:51 compute-0 sudo[134953]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:37:52 compute-0 sudo[135076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gybotxoicubxjwiulflnsklyqfxcyceg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503070.9976394-716-80297586042160/AnsiballZ_copy.py'
Jan 27 08:37:52 compute-0 sudo[135076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:52 compute-0 python3.9[135078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503070.9976394-716-80297586042160/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7fb5f5782584574169f631b3aaaac1ffc15b0eb1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:52 compute-0 sudo[135076]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:52 compute-0 ceph-mon[74357]: pgmap v412: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:52.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:53.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:53 compute-0 sudo[135229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvojysxbmtncwnzkaprjmsletrpzlmar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503072.9124386-781-228825233906735/AnsiballZ_file.py'
Jan 27 08:37:53 compute-0 sudo[135229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:53 compute-0 python3.9[135231]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:37:53 compute-0 sudo[135229]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:53 compute-0 sudo[135381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajdkselojlgwdidtgueahrylgmvcermt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503073.5867307-808-6633689015218/AnsiballZ_stat.py'
Jan 27 08:37:53 compute-0 sudo[135381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:54 compute-0 python3.9[135383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:54 compute-0 sudo[135381]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:54 compute-0 sudo[135504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxsynxmkbqlmbppjnuuzuogdljpfgtun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503073.5867307-808-6633689015218/AnsiballZ_copy.py'
Jan 27 08:37:54 compute-0 sudo[135504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:54 compute-0 python3.9[135506]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503073.5867307-808-6633689015218/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7fb5f5782584574169f631b3aaaac1ffc15b0eb1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:54 compute-0 sudo[135504]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:37:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:54.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:37:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:55.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:55 compute-0 ceph-mon[74357]: pgmap v413: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:55 compute-0 sudo[135657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxldcfexjokscfjhtyzxccrepgheggar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503074.8604655-848-77331277020689/AnsiballZ_file.py'
Jan 27 08:37:55 compute-0 sudo[135657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:55 compute-0 python3.9[135659]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:37:55 compute-0 sudo[135657]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:55 compute-0 sudo[135684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:55 compute-0 sudo[135684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:55 compute-0 sudo[135684]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:55 compute-0 sudo[135732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:37:55 compute-0 sudo[135732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:37:55 compute-0 sudo[135732]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:56 compute-0 sudo[135859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihrnbzkjtzxzdshasbnezokkqkhbdmrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503075.7305915-872-17919382508886/AnsiballZ_stat.py'
Jan 27 08:37:56 compute-0 sudo[135859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:56 compute-0 python3.9[135861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:56 compute-0 sudo[135859]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:56 compute-0 sudo[135982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzkvdrxitaqoxizukyogpdyqlrofptvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503075.7305915-872-17919382508886/AnsiballZ_copy.py'
Jan 27 08:37:56 compute-0 sudo[135982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:56 compute-0 ceph-mon[74357]: pgmap v414: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:56 compute-0 python3.9[135984]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503075.7305915-872-17919382508886/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7fb5f5782584574169f631b3aaaac1ffc15b0eb1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:56 compute-0 sudo[135982]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:56.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:57.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:37:57 compute-0 sudo[136135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zulscjngifpmlxxwyxrqlvlmqafcmnyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503077.0373437-919-252135078328461/AnsiballZ_file.py'
Jan 27 08:37:57 compute-0 sudo[136135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:57 compute-0 python3.9[136137]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:37:57 compute-0 sudo[136135]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:57 compute-0 sudo[136287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoouqacahywjxkbuozqqetkhjkzqdfvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503077.6208797-938-254467589584975/AnsiballZ_stat.py'
Jan 27 08:37:57 compute-0 sudo[136287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:58 compute-0 python3.9[136289]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:37:58 compute-0 sudo[136287]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:58 compute-0 sudo[136410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndmbbpqviyycdiyglnusnffofxpelyao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503077.6208797-938-254467589584975/AnsiballZ_copy.py'
Jan 27 08:37:58 compute-0 sudo[136410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:58 compute-0 python3.9[136412]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503077.6208797-938-254467589584975/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7fb5f5782584574169f631b3aaaac1ffc15b0eb1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:37:58 compute-0 sudo[136410]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:58 compute-0 ceph-mon[74357]: pgmap v415: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:37:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:37:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:37:58.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:37:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:37:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:37:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:37:59.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:37:59 compute-0 sudo[136563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbgnfqrgvxhncytuglresblckgacbyhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503078.815863-976-35277751880908/AnsiballZ_file.py'
Jan 27 08:37:59 compute-0 sudo[136563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:37:59 compute-0 python3.9[136565]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:37:59 compute-0 sudo[136563]: pam_unix(sudo:session): session closed for user root
Jan 27 08:37:59 compute-0 sudo[136715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doxaosocmnkxoddyspvkpxmmpkmrijgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503079.4981542-999-129164179840922/AnsiballZ_stat.py'
Jan 27 08:37:59 compute-0 sudo[136715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:00 compute-0 ceph-mon[74357]: pgmap v416: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:00 compute-0 python3.9[136717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:38:00 compute-0 sudo[136715]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:00 compute-0 sudo[136838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csuxxvkwpkmbeyomhcaehbihmyyivqte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503079.4981542-999-129164179840922/AnsiballZ_copy.py'
Jan 27 08:38:00 compute-0 sudo[136838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:00 compute-0 python3.9[136840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503079.4981542-999-129164179840922/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7fb5f5782584574169f631b3aaaac1ffc15b0eb1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:00 compute-0 sudo[136838]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:00.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:38:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:01.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:38:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:38:02 compute-0 ceph-mon[74357]: pgmap v417: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:02.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:03.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:04 compute-0 ceph-mon[74357]: pgmap v418: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:38:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:04.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:38:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:38:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:05.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:38:06 compute-0 sshd-session[129774]: Connection closed by 192.168.122.30 port 57524
Jan 27 08:38:06 compute-0 sshd-session[129771]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:38:06 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 27 08:38:06 compute-0 systemd[1]: session-44.scope: Consumed 22.183s CPU time.
Jan 27 08:38:06 compute-0 systemd-logind[799]: Session 44 logged out. Waiting for processes to exit.
Jan 27 08:38:06 compute-0 systemd-logind[799]: Removed session 44.
Jan 27 08:38:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:06 compute-0 ceph-mon[74357]: pgmap v419: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:38:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:06.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:38:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:07.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:38:08 compute-0 ceph-mon[74357]: pgmap v420: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:08.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:09.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:10 compute-0 ceph-mon[74357]: pgmap v421: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:10.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:11.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:11 compute-0 sshd-session[136871]: Accepted publickey for zuul from 192.168.122.30 port 38998 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:38:11 compute-0 systemd-logind[799]: New session 45 of user zuul.
Jan 27 08:38:11 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 27 08:38:11 compute-0 sshd-session[136871]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:38:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:38:12 compute-0 ceph-mon[74357]: pgmap v422: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:12 compute-0 sudo[137024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajldfapltekbxtglxgbhtffdmpqctyzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503091.9788127-26-97968577018906/AnsiballZ_file.py'
Jan 27 08:38:12 compute-0 sudo[137024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:12 compute-0 python3.9[137026]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:12 compute-0 sudo[137024]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:13.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:13.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:13 compute-0 sudo[137177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylymschjiliseumowdvnjairocgvbqzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503093.0911763-62-244004952376505/AnsiballZ_stat.py'
Jan 27 08:38:13 compute-0 sudo[137177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:13 compute-0 python3.9[137179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:38:13 compute-0 sudo[137177]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:14 compute-0 sudo[137300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmawiomdqewkkysjycobnkvunuxozhxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503093.0911763-62-244004952376505/AnsiballZ_copy.py'
Jan 27 08:38:14 compute-0 sudo[137300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:14 compute-0 python3.9[137302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503093.0911763-62-244004952376505/.source.conf _original_basename=ceph.conf follow=False checksum=220841d078adbec2d7092c2af6c7e486c5aef931 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:14 compute-0 sudo[137300]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:14 compute-0 ceph-mon[74357]: pgmap v423: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:14 compute-0 sudo[137452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqpbjzbunrlfyzpnazojyazcjbxoqoby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503094.560298-62-198566822344548/AnsiballZ_stat.py'
Jan 27 08:38:14 compute-0 sudo[137452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:38:14
Jan 27 08:38:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:38:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:38:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.meta']
Jan 27 08:38:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:38:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:15.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:38:15 compute-0 python3.9[137454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:38:15 compute-0 sudo[137452]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:38:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:15.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:38:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:38:15 compute-0 sudo[137576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swsmqwrfjyleecssjmobfticwmzpkdrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503094.560298-62-198566822344548/AnsiballZ_copy.py'
Jan 27 08:38:15 compute-0 sudo[137576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:15 compute-0 python3.9[137578]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503094.560298-62-198566822344548/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=a78720c7651b641fc0d432dbe481248898ae80a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:15 compute-0 sudo[137576]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:15 compute-0 sudo[137603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:15 compute-0 sudo[137603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:15 compute-0 sudo[137603]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:15 compute-0 sudo[137628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:15 compute-0 sudo[137628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:15 compute-0 sudo[137628]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:16 compute-0 sshd-session[136874]: Connection closed by 192.168.122.30 port 38998
Jan 27 08:38:16 compute-0 sshd-session[136871]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:38:16 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 27 08:38:16 compute-0 systemd[1]: session-45.scope: Consumed 2.576s CPU time.
Jan 27 08:38:16 compute-0 systemd-logind[799]: Session 45 logged out. Waiting for processes to exit.
Jan 27 08:38:16 compute-0 systemd-logind[799]: Removed session 45.
Jan 27 08:38:16 compute-0 ceph-mon[74357]: pgmap v424: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:38:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:17.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:38:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:17.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:38:18 compute-0 ceph-mon[74357]: pgmap v425: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:19.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:19.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:20 compute-0 ceph-mon[74357]: pgmap v426: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:21.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:21.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:38:22 compute-0 ceph-mon[74357]: pgmap v427: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:23.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:23.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:23 compute-0 sshd-session[137657]: Accepted publickey for zuul from 192.168.122.30 port 45510 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:38:23 compute-0 systemd-logind[799]: New session 46 of user zuul.
Jan 27 08:38:23 compute-0 systemd[1]: Started Session 46 of User zuul.
Jan 27 08:38:23 compute-0 sshd-session[137657]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:38:24 compute-0 ceph-mon[74357]: pgmap v428: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:24 compute-0 python3.9[137810]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:38:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:25.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:25.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:26 compute-0 sudo[137965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwblnilbzofdznsxvypvacixyeodfyys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503105.6609738-62-9442939312729/AnsiballZ_file.py'
Jan 27 08:38:26 compute-0 sudo[137965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:26 compute-0 python3.9[137967]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:38:26 compute-0 sudo[137965]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:26 compute-0 sudo[138117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtnhfhitvorprzrnsuvuuksqifenvvvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503106.4540474-62-32436890611189/AnsiballZ_file.py'
Jan 27 08:38:26 compute-0 sudo[138117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:26 compute-0 ceph-mon[74357]: pgmap v429: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:26 compute-0 python3.9[138119]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:38:26 compute-0 sudo[138117]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:27.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:27.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:38:27 compute-0 python3.9[138270]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:38:28 compute-0 sudo[138420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eupqxxukkacaeimxjudfwpjohizvwjbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503108.3019063-131-50976706511600/AnsiballZ_seboolean.py'
Jan 27 08:38:28 compute-0 sudo[138420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:29 compute-0 ceph-mon[74357]: pgmap v430: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:38:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:29.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:38:29 compute-0 python3.9[138422]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 27 08:38:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:29.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:30 compute-0 sudo[138420]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:30 compute-0 ceph-mon[74357]: pgmap v431: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:31.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:31.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:31 compute-0 sudo[138578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxoferoedndpknflscawykjodyjaumws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503111.0548596-161-277888962414123/AnsiballZ_setup.py'
Jan 27 08:38:31 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 27 08:38:31 compute-0 sudo[138578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:31 compute-0 python3.9[138580]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:38:31 compute-0 sudo[138578]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:38:32 compute-0 sudo[138662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiojflwsrutahvbkkrdbvzvwqwtmhyxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503111.0548596-161-277888962414123/AnsiballZ_dnf.py'
Jan 27 08:38:32 compute-0 sudo[138662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:32 compute-0 ceph-mon[74357]: pgmap v432: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:32 compute-0 python3.9[138664]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:38:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:33.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:33.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:34 compute-0 sudo[138662]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:34 compute-0 ceph-mon[74357]: pgmap v433: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:35.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:35 compute-0 sudo[138817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijjznpbfewsctzaiwdhufuhfamdvudpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503114.4280384-197-74202083760119/AnsiballZ_systemd.py'
Jan 27 08:38:35 compute-0 sudo[138817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:35.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:35 compute-0 python3.9[138819]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:38:35 compute-0 sudo[138817]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:36 compute-0 sudo[138899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:36 compute-0 sudo[138899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:36 compute-0 sudo[138899]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:36 compute-0 sudo[138947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:36 compute-0 sudo[138947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:36 compute-0 sudo[138947]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:36 compute-0 sudo[139022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snmhidpzxbipauterpipyhmbopepaeuc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769503115.7309577-221-109392065875023/AnsiballZ_edpm_nftables_snippet.py'
Jan 27 08:38:36 compute-0 sudo[139022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:38:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:37.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:38:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:37.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:38:37 compute-0 ceph-mon[74357]: pgmap v434: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:37 compute-0 sudo[139026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:37 compute-0 sudo[139026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:37 compute-0 python3[139024]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 27 08:38:37 compute-0 sudo[139026]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:37 compute-0 sudo[139022]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:37 compute-0 sudo[139051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:38:37 compute-0 sudo[139051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:37 compute-0 sudo[139051]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:37 compute-0 sudo[139083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:37 compute-0 sudo[139083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:37 compute-0 sudo[139083]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:37 compute-0 sudo[139125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 27 08:38:37 compute-0 sudo[139125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:37 compute-0 sudo[139125]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:38:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:38:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:38:37 compute-0 sudo[139295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqapzsauuamlykxgayqxlnjyrfiyqxeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503117.6427858-248-4883938706977/AnsiballZ_file.py'
Jan 27 08:38:37 compute-0 sudo[139295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:38:38 compute-0 sudo[139298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:38 compute-0 sudo[139298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:38 compute-0 sudo[139298]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:38 compute-0 sudo[139323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:38:38 compute-0 python3.9[139297]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:38 compute-0 sudo[139323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:38 compute-0 sudo[139323]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:38 compute-0 sudo[139295]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:38 compute-0 sudo[139348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:38 compute-0 sudo[139348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:38 compute-0 sudo[139348]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:38 compute-0 sudo[139396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:38:38 compute-0 sudo[139396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:38 compute-0 sudo[139396]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:38:38 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:38:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:38:38 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:38:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:38:38 compute-0 sudo[139577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxxpumaabmkwhknqqcgcobkvspehxqot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503118.4518998-272-118134027053114/AnsiballZ_stat.py'
Jan 27 08:38:38 compute-0 sudo[139577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:38 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:38:38 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 39cb2336-7bbd-4027-9c24-21c9486ff648 does not exist
Jan 27 08:38:38 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 9198a4a6-c290-4e8c-8cd3-5460e488c31e does not exist
Jan 27 08:38:38 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 34bc5337-86df-4519-8a41-48a3e765dcb0 does not exist
Jan 27 08:38:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:38:38 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:38:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:38:38 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:38:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:38:38 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:38:38 compute-0 ceph-mon[74357]: pgmap v435: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:38:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:38:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:38:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:38:38 compute-0 sudo[139580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:38 compute-0 sudo[139580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:38 compute-0 sudo[139580]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:39 compute-0 sudo[139605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:38:39 compute-0 sudo[139605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:39 compute-0 sudo[139605]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:39.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:39 compute-0 sudo[139630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:39 compute-0 sudo[139630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:39 compute-0 sudo[139630]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:39 compute-0 python3.9[139579]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:38:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:39.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:39 compute-0 sudo[139577]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:39 compute-0 sudo[139656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:38:39 compute-0 sudo[139656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:39 compute-0 sudo[139785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajiorhzgsybfunjznkgeqpserovsrezk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503118.4518998-272-118134027053114/AnsiballZ_file.py'
Jan 27 08:38:39 compute-0 sudo[139785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:39 compute-0 podman[139800]: 2026-01-27 08:38:39.417762473 +0000 UTC m=+0.020071656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:38:39 compute-0 python3.9[139796]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:39 compute-0 sudo[139785]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:39 compute-0 podman[139800]: 2026-01-27 08:38:39.609199147 +0000 UTC m=+0.211508310 container create 4918e90af8b0c225e116172b7722044bcbb4c00d7ed7fed9a5abc29df716e5a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:38:39 compute-0 systemd[1]: Started libpod-conmon-4918e90af8b0c225e116172b7722044bcbb4c00d7ed7fed9a5abc29df716e5a7.scope.
Jan 27 08:38:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:38:39 compute-0 podman[139800]: 2026-01-27 08:38:39.764453823 +0000 UTC m=+0.366763016 container init 4918e90af8b0c225e116172b7722044bcbb4c00d7ed7fed9a5abc29df716e5a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 27 08:38:39 compute-0 podman[139800]: 2026-01-27 08:38:39.773657807 +0000 UTC m=+0.375966970 container start 4918e90af8b0c225e116172b7722044bcbb4c00d7ed7fed9a5abc29df716e5a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:38:39 compute-0 exciting_perlman[139840]: 167 167
Jan 27 08:38:39 compute-0 systemd[1]: libpod-4918e90af8b0c225e116172b7722044bcbb4c00d7ed7fed9a5abc29df716e5a7.scope: Deactivated successfully.
Jan 27 08:38:39 compute-0 podman[139800]: 2026-01-27 08:38:39.807633645 +0000 UTC m=+0.409942848 container attach 4918e90af8b0c225e116172b7722044bcbb4c00d7ed7fed9a5abc29df716e5a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:38:39 compute-0 podman[139800]: 2026-01-27 08:38:39.808125689 +0000 UTC m=+0.410434862 container died 4918e90af8b0c225e116172b7722044bcbb4c00d7ed7fed9a5abc29df716e5a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:38:40 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:38:40 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:38:40 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:38:40 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:38:40 compute-0 ceph-mon[74357]: pgmap v436: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c53830a2b1a2b258bf5ff9434bd2b8af0245e31a754d2196b2be11fd3f722cd2-merged.mount: Deactivated successfully.
Jan 27 08:38:40 compute-0 sudo[139983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyhomflajuvzrnegugjswcbrbybbhfyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503119.9074743-308-110510230103835/AnsiballZ_stat.py'
Jan 27 08:38:40 compute-0 sudo[139983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:40 compute-0 podman[139800]: 2026-01-27 08:38:40.466275817 +0000 UTC m=+1.068584990 container remove 4918e90af8b0c225e116172b7722044bcbb4c00d7ed7fed9a5abc29df716e5a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:38:40 compute-0 systemd[1]: libpod-conmon-4918e90af8b0c225e116172b7722044bcbb4c00d7ed7fed9a5abc29df716e5a7.scope: Deactivated successfully.
Jan 27 08:38:40 compute-0 python3.9[139985]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:38:40 compute-0 sudo[139983]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:40 compute-0 podman[139994]: 2026-01-27 08:38:40.668075678 +0000 UTC m=+0.050082363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:38:40 compute-0 podman[139994]: 2026-01-27 08:38:40.794530369 +0000 UTC m=+0.176536984 container create 620da14beafd3868924bb3629539a712ca90ec52279744ee2313d319da5e34f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noether, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:38:40 compute-0 systemd[1]: Started libpod-conmon-620da14beafd3868924bb3629539a712ca90ec52279744ee2313d319da5e34f2.scope.
Jan 27 08:38:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1812b575d3ae431744434fcbd127161a5b2f207d2ac7c0cb0d74d3afe3b6983b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1812b575d3ae431744434fcbd127161a5b2f207d2ac7c0cb0d74d3afe3b6983b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1812b575d3ae431744434fcbd127161a5b2f207d2ac7c0cb0d74d3afe3b6983b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1812b575d3ae431744434fcbd127161a5b2f207d2ac7c0cb0d74d3afe3b6983b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1812b575d3ae431744434fcbd127161a5b2f207d2ac7c0cb0d74d3afe3b6983b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:40 compute-0 podman[139994]: 2026-01-27 08:38:40.95684722 +0000 UTC m=+0.338853835 container init 620da14beafd3868924bb3629539a712ca90ec52279744ee2313d319da5e34f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:38:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:40 compute-0 podman[139994]: 2026-01-27 08:38:40.963784891 +0000 UTC m=+0.345791496 container start 620da14beafd3868924bb3629539a712ca90ec52279744ee2313d319da5e34f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noether, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:38:40 compute-0 sudo[140088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prvqsxpkwaugmpqfugbvzrdaymudkhtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503119.9074743-308-110510230103835/AnsiballZ_file.py'
Jan 27 08:38:40 compute-0 sudo[140088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:41 compute-0 podman[139994]: 2026-01-27 08:38:41.003762185 +0000 UTC m=+0.385768790 container attach 620da14beafd3868924bb3629539a712ca90ec52279744ee2313d319da5e34f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:38:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:41.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:38:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:41.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:38:41 compute-0 python3.9[140093]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6n2lhpk5 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:41 compute-0 sudo[140088]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:41 compute-0 vigilant_noether[140050]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:38:41 compute-0 vigilant_noether[140050]: --> relative data size: 1.0
Jan 27 08:38:41 compute-0 vigilant_noether[140050]: --> All data devices are unavailable
Jan 27 08:38:41 compute-0 sudo[140254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbetedcxydmyiwfggxqqzjrfwjjzibpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503121.412714-344-173652595682846/AnsiballZ_stat.py'
Jan 27 08:38:41 compute-0 sudo[140254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:41 compute-0 systemd[1]: libpod-620da14beafd3868924bb3629539a712ca90ec52279744ee2313d319da5e34f2.scope: Deactivated successfully.
Jan 27 08:38:41 compute-0 podman[139994]: 2026-01-27 08:38:41.734767805 +0000 UTC m=+1.116774410 container died 620da14beafd3868924bb3629539a712ca90ec52279744ee2313d319da5e34f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noether, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:38:41 compute-0 python3.9[140256]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:38:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1812b575d3ae431744434fcbd127161a5b2f207d2ac7c0cb0d74d3afe3b6983b-merged.mount: Deactivated successfully.
Jan 27 08:38:41 compute-0 sudo[140254]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:38:42 compute-0 podman[139994]: 2026-01-27 08:38:42.192722447 +0000 UTC m=+1.574729042 container remove 620da14beafd3868924bb3629539a712ca90ec52279744ee2313d319da5e34f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 08:38:42 compute-0 systemd[1]: libpod-conmon-620da14beafd3868924bb3629539a712ca90ec52279744ee2313d319da5e34f2.scope: Deactivated successfully.
Jan 27 08:38:42 compute-0 sudo[139656]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:42 compute-0 sudo[140320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:42 compute-0 sudo[140320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:42 compute-0 sudo[140320]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:42 compute-0 sudo[140371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbbrzxwldemceboykqccayfxvrppjeir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503121.412714-344-173652595682846/AnsiballZ_file.py'
Jan 27 08:38:42 compute-0 sudo[140371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:42 compute-0 sudo[140372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:38:42 compute-0 sudo[140372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:42 compute-0 sudo[140372]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:42 compute-0 sudo[140399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:42 compute-0 sudo[140399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:42 compute-0 sudo[140399]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:42 compute-0 sudo[140424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:38:42 compute-0 sudo[140424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:42 compute-0 ceph-mon[74357]: pgmap v437: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:42 compute-0 python3.9[140386]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:42 compute-0 sudo[140371]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:42 compute-0 podman[140515]: 2026-01-27 08:38:42.864749708 +0000 UTC m=+0.056352576 container create ff0150943ffcc4ce27f8970973fc45ee369ae609999d86266a5501eb032157b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_rosalind, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:38:42 compute-0 systemd[1]: Started libpod-conmon-ff0150943ffcc4ce27f8970973fc45ee369ae609999d86266a5501eb032157b6.scope.
Jan 27 08:38:42 compute-0 podman[140515]: 2026-01-27 08:38:42.831174541 +0000 UTC m=+0.022777459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:38:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:38:42 compute-0 podman[140515]: 2026-01-27 08:38:42.957459407 +0000 UTC m=+0.149062305 container init ff0150943ffcc4ce27f8970973fc45ee369ae609999d86266a5501eb032157b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Jan 27 08:38:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:42 compute-0 podman[140515]: 2026-01-27 08:38:42.968701618 +0000 UTC m=+0.160304496 container start ff0150943ffcc4ce27f8970973fc45ee369ae609999d86266a5501eb032157b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_rosalind, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:38:42 compute-0 reverent_rosalind[140553]: 167 167
Jan 27 08:38:42 compute-0 systemd[1]: libpod-ff0150943ffcc4ce27f8970973fc45ee369ae609999d86266a5501eb032157b6.scope: Deactivated successfully.
Jan 27 08:38:42 compute-0 conmon[140553]: conmon ff0150943ffcc4ce27f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff0150943ffcc4ce27f8970973fc45ee369ae609999d86266a5501eb032157b6.scope/container/memory.events
Jan 27 08:38:42 compute-0 podman[140515]: 2026-01-27 08:38:42.976278097 +0000 UTC m=+0.167880985 container attach ff0150943ffcc4ce27f8970973fc45ee369ae609999d86266a5501eb032157b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_rosalind, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:38:42 compute-0 podman[140515]: 2026-01-27 08:38:42.976722539 +0000 UTC m=+0.168325407 container died ff0150943ffcc4ce27f8970973fc45ee369ae609999d86266a5501eb032157b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_rosalind, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 27 08:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cf86ef9012e269047fe63cda406086153c8651d4f99bcffa5cd707157a95876-merged.mount: Deactivated successfully.
Jan 27 08:38:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:38:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:43.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:38:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:38:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:43.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:38:43 compute-0 podman[140515]: 2026-01-27 08:38:43.119605753 +0000 UTC m=+0.311208621 container remove ff0150943ffcc4ce27f8970973fc45ee369ae609999d86266a5501eb032157b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:38:43 compute-0 systemd[1]: libpod-conmon-ff0150943ffcc4ce27f8970973fc45ee369ae609999d86266a5501eb032157b6.scope: Deactivated successfully.
Jan 27 08:38:43 compute-0 podman[140630]: 2026-01-27 08:38:43.324544601 +0000 UTC m=+0.087892927 container create e146d6988a36b398b47d7896a2a44c85aba7fec62d89ffd3593cc6cf51a630d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keldysh, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:38:43 compute-0 podman[140630]: 2026-01-27 08:38:43.257725616 +0000 UTC m=+0.021073962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:38:43 compute-0 systemd[1]: Started libpod-conmon-e146d6988a36b398b47d7896a2a44c85aba7fec62d89ffd3593cc6cf51a630d2.scope.
Jan 27 08:38:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:38:43 compute-0 sudo[140697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndicutrkbrhfrpheufvgfkaqnpstbrqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503122.9076731-383-167971466510666/AnsiballZ_command.py'
Jan 27 08:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59848d4672d6fbba49fc2a776696eb4fd529e1249def856e7c79cb13cce63dc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59848d4672d6fbba49fc2a776696eb4fd529e1249def856e7c79cb13cce63dc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59848d4672d6fbba49fc2a776696eb4fd529e1249def856e7c79cb13cce63dc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59848d4672d6fbba49fc2a776696eb4fd529e1249def856e7c79cb13cce63dc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:43 compute-0 sudo[140697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:43 compute-0 podman[140630]: 2026-01-27 08:38:43.407513781 +0000 UTC m=+0.170862127 container init e146d6988a36b398b47d7896a2a44c85aba7fec62d89ffd3593cc6cf51a630d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:38:43 compute-0 podman[140630]: 2026-01-27 08:38:43.417125046 +0000 UTC m=+0.180473372 container start e146d6988a36b398b47d7896a2a44c85aba7fec62d89ffd3593cc6cf51a630d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:38:43 compute-0 podman[140630]: 2026-01-27 08:38:43.431050981 +0000 UTC m=+0.194399337 container attach e146d6988a36b398b47d7896a2a44c85aba7fec62d89ffd3593cc6cf51a630d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:38:43 compute-0 python3.9[140702]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:38:43 compute-0 sudo[140697]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]: {
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:     "0": [
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:         {
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             "devices": [
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "/dev/loop3"
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             ],
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             "lv_name": "ceph_lv0",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             "lv_size": "7511998464",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             "name": "ceph_lv0",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             "tags": {
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "ceph.cluster_name": "ceph",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "ceph.crush_device_class": "",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "ceph.encrypted": "0",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "ceph.osd_id": "0",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "ceph.type": "block",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:                 "ceph.vdo": "0"
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             },
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             "type": "block",
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:             "vg_name": "ceph_vg0"
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:         }
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]:     ]
Jan 27 08:38:44 compute-0 priceless_keldysh[140698]: }
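The JSON block printed by the priceless_keldysh container has the shape of "ceph-volume lvm list --format json": each OSD id ("0" here) maps to a list of logical volumes carrying the ceph.* LVM tags. A minimal sketch of pulling the placement fields out of such a payload, using only keys present in the output above (the capture filename is hypothetical):

    # Sketch: extract OSD placement info from ceph-volume lvm-list style JSON.
    import json

    # Hypothetical file holding the container stdout captured above.
    payload = json.loads(open("lvm-list.json").read())

    for osd_id, lvs in payload.items():
        for lv in lvs:
            tags = lv["tags"]
            print(
                f"osd.{osd_id}: lv={lv['lv_path']} "
                f"devices={','.join(lv['devices'])} "
                f"osd_fsid={tags['ceph.osd_fsid']} "
                f"encrypted={tags['ceph.encrypted']}"
            )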
Jan 27 08:38:44 compute-0 systemd[1]: libpod-e146d6988a36b398b47d7896a2a44c85aba7fec62d89ffd3593cc6cf51a630d2.scope: Deactivated successfully.
Jan 27 08:38:44 compute-0 podman[140786]: 2026-01-27 08:38:44.262162204 +0000 UTC m=+0.024983070 container died e146d6988a36b398b47d7896a2a44c85aba7fec62d89ffd3593cc6cf51a630d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-59848d4672d6fbba49fc2a776696eb4fd529e1249def856e7c79cb13cce63dc0-merged.mount: Deactivated successfully.
Jan 27 08:38:44 compute-0 podman[140786]: 2026-01-27 08:38:44.31273706 +0000 UTC m=+0.075557896 container remove e146d6988a36b398b47d7896a2a44c85aba7fec62d89ffd3593cc6cf51a630d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keldysh, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:38:44 compute-0 systemd[1]: libpod-conmon-e146d6988a36b398b47d7896a2a44c85aba7fec62d89ffd3593cc6cf51a630d2.scope: Deactivated successfully.
Jan 27 08:38:44 compute-0 sudo[140424]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:44 compute-0 sudo[140844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:44 compute-0 sudo[140844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:44 compute-0 sudo[140844]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:44 compute-0 sudo[140919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fphaoweukpfzoenglkckbnirnbaccjll ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769503124.0007908-407-122124255588248/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 08:38:44 compute-0 sudo[140919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:44 compute-0 sudo[140887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:38:44 compute-0 sudo[140887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:44 compute-0 sudo[140887]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:44 compute-0 sudo[140930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:44 compute-0 sudo[140930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:44 compute-0 sudo[140930]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:44 compute-0 sudo[140955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:38:44 compute-0 sudo[140955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:44 compute-0 ceph-mon[74357]: pgmap v438: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:44 compute-0 python3[140927]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 08:38:44 compute-0 sudo[140919]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:44 compute-0 podman[141043]: 2026-01-27 08:38:44.904821115 +0000 UTC m=+0.039953284 container create 4c09d4e4765934f2b35919e8c35e1809c394c3a7a20ce6a94dba6fae8a9096e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:38:44 compute-0 systemd[1]: Started libpod-conmon-4c09d4e4765934f2b35919e8c35e1809c394c3a7a20ce6a94dba6fae8a9096e2.scope.
Jan 27 08:38:44 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:38:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:44 compute-0 podman[141043]: 2026-01-27 08:38:44.973596574 +0000 UTC m=+0.108728763 container init 4c09d4e4765934f2b35919e8c35e1809c394c3a7a20ce6a94dba6fae8a9096e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:38:44 compute-0 podman[141043]: 2026-01-27 08:38:44.97999709 +0000 UTC m=+0.115129259 container start 4c09d4e4765934f2b35919e8c35e1809c394c3a7a20ce6a94dba6fae8a9096e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:38:44 compute-0 podman[141043]: 2026-01-27 08:38:44.983466306 +0000 UTC m=+0.118598475 container attach 4c09d4e4765934f2b35919e8c35e1809c394c3a7a20ce6a94dba6fae8a9096e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 27 08:38:44 compute-0 podman[141043]: 2026-01-27 08:38:44.889146913 +0000 UTC m=+0.024279112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:38:44 compute-0 beautiful_zhukovsky[141059]: 167 167
Jan 27 08:38:44 compute-0 systemd[1]: libpod-4c09d4e4765934f2b35919e8c35e1809c394c3a7a20ce6a94dba6fae8a9096e2.scope: Deactivated successfully.
Jan 27 08:38:44 compute-0 podman[141043]: 2026-01-27 08:38:44.98616615 +0000 UTC m=+0.121298309 container died 4c09d4e4765934f2b35919e8c35e1809c394c3a7a20ce6a94dba6fae8a9096e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 27 08:38:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee5f8937a797b2597343266787353f882ab5c1c685852b0d03963d51051901f8-merged.mount: Deactivated successfully.
Jan 27 08:38:45 compute-0 podman[141043]: 2026-01-27 08:38:45.023779509 +0000 UTC m=+0.158911678 container remove 4c09d4e4765934f2b35919e8c35e1809c394c3a7a20ce6a94dba6fae8a9096e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_zhukovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:38:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:38:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:38:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:45.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
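The anonymous "HEAD / HTTP/1.0" requests that recur from 192.168.122.100 and 192.168.122.102 roughly every two seconds look like load-balancer health probes against the RGW beast frontend; the log does not say which checker sends them. For reference, the same kind of probe in Python (host and port are assumptions, not taken from the log):

    # Sketch: an anonymous HEAD / probe like the beast lines record.
    # The endpoint address/port below is an assumption.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # 200 in the log lines above
    conn.close()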
Jan 27 08:38:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:38:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:38:45 compute-0 systemd[1]: libpod-conmon-4c09d4e4765934f2b35919e8c35e1809c394c3a7a20ce6a94dba6fae8a9096e2.scope: Deactivated successfully.
Jan 27 08:38:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:38:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:38:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:38:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:45.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:38:45 compute-0 podman[141135]: 2026-01-27 08:38:45.166969772 +0000 UTC m=+0.036190430 container create 7fedc0d24fc396d4dc06effeb1be5e60c2d74db7fa1da0d94932fbd31251c9dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:38:45 compute-0 systemd[1]: Started libpod-conmon-7fedc0d24fc396d4dc06effeb1be5e60c2d74db7fa1da0d94932fbd31251c9dc.scope.
Jan 27 08:38:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35feb5a2a51548ed46d0409957339ae8a194fbe640cac93e0b04ba821a5de21d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35feb5a2a51548ed46d0409957339ae8a194fbe640cac93e0b04ba821a5de21d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35feb5a2a51548ed46d0409957339ae8a194fbe640cac93e0b04ba821a5de21d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35feb5a2a51548ed46d0409957339ae8a194fbe640cac93e0b04ba821a5de21d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:45 compute-0 podman[141135]: 2026-01-27 08:38:45.230038513 +0000 UTC m=+0.099259191 container init 7fedc0d24fc396d4dc06effeb1be5e60c2d74db7fa1da0d94932fbd31251c9dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:38:45 compute-0 podman[141135]: 2026-01-27 08:38:45.240514792 +0000 UTC m=+0.109735450 container start 7fedc0d24fc396d4dc06effeb1be5e60c2d74db7fa1da0d94932fbd31251c9dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 27 08:38:45 compute-0 podman[141135]: 2026-01-27 08:38:45.243799773 +0000 UTC m=+0.113020451 container attach 7fedc0d24fc396d4dc06effeb1be5e60c2d74db7fa1da0d94932fbd31251c9dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:38:45 compute-0 podman[141135]: 2026-01-27 08:38:45.150831536 +0000 UTC m=+0.020052214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:38:45 compute-0 sudo[141230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sulldgreuacsmsicwkjdaybagyvflrrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503125.009039-431-99900531765410/AnsiballZ_stat.py'
Jan 27 08:38:45 compute-0 sudo[141230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:45 compute-0 python3.9[141232]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:38:45 compute-0 sudo[141230]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:46 compute-0 sudo[141369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxbzwcunbtgfuhrddiucejrgmwnbvvyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503125.009039-431-99900531765410/AnsiballZ_copy.py'
Jan 27 08:38:46 compute-0 sudo[141369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:46 compute-0 lucid_bell[141175]: {
Jan 27 08:38:46 compute-0 lucid_bell[141175]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:38:46 compute-0 lucid_bell[141175]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:38:46 compute-0 lucid_bell[141175]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:38:46 compute-0 lucid_bell[141175]:         "osd_id": 0,
Jan 27 08:38:46 compute-0 lucid_bell[141175]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:38:46 compute-0 lucid_bell[141175]:         "type": "bluestore"
Jan 27 08:38:46 compute-0 lucid_bell[141175]:     }
Jan 27 08:38:46 compute-0 lucid_bell[141175]: }
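This lucid_bell block is the result of the cephadm call logged at 08:38:44 ("ceph-volume ... raw list --format json"): a map keyed by osd_uuid. Note that the uuid c06a7c81-ab3c-42b8-812f-79473670be30 equals ceph.osd_fsid in the LVM tags printed earlier, which is the join key between the two views. A sketch of that cross-check, with both dicts standing in for the captured container stdout:

    # Sketch: join "raw list" output (keyed by osd_uuid) with the
    # osd_fsid -> lv_name mapping from the lvm tags seen earlier.
    raw = {
        "c06a7c81-ab3c-42b8-812f-79473670be30": {
            "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "type": "bluestore",
        }
    }
    lvm_fsids = {"c06a7c81-ab3c-42b8-812f-79473670be30": "ceph_lv0"}

    for osd_uuid, info in raw.items():
        lv = lvm_fsids.get(osd_uuid, "<unknown>")
        print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']} from lv {lv}")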
Jan 27 08:38:46 compute-0 systemd[1]: libpod-7fedc0d24fc396d4dc06effeb1be5e60c2d74db7fa1da0d94932fbd31251c9dc.scope: Deactivated successfully.
Jan 27 08:38:46 compute-0 podman[141135]: 2026-01-27 08:38:46.078879735 +0000 UTC m=+0.948100393 container died 7fedc0d24fc396d4dc06effeb1be5e60c2d74db7fa1da0d94932fbd31251c9dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-35feb5a2a51548ed46d0409957339ae8a194fbe640cac93e0b04ba821a5de21d-merged.mount: Deactivated successfully.
Jan 27 08:38:46 compute-0 python3.9[141371]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503125.009039-431-99900531765410/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:46 compute-0 sudo[141369]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:46 compute-0 podman[141135]: 2026-01-27 08:38:46.332155228 +0000 UTC m=+1.201375876 container remove 7fedc0d24fc396d4dc06effeb1be5e60c2d74db7fa1da0d94932fbd31251c9dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:38:46 compute-0 systemd[1]: libpod-conmon-7fedc0d24fc396d4dc06effeb1be5e60c2d74db7fa1da0d94932fbd31251c9dc.scope: Deactivated successfully.
Jan 27 08:38:46 compute-0 sudo[140955]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:38:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:38:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:38:46 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:38:46 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c1912af8-dbf4-47be-9b97-1401c92446fb does not exist
Jan 27 08:38:46 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 82109767-932a-4ddd-8fec-849365c8adee does not exist
Jan 27 08:38:46 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c367df34-0e71-4bc7-9305-9d1f1dd8e22f does not exist
Jan 27 08:38:46 compute-0 sudo[141410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:46 compute-0 sudo[141410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:46 compute-0 sudo[141410]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:46 compute-0 sudo[141447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:38:46 compute-0 sudo[141447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:46 compute-0 sudo[141447]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:46 compute-0 ceph-mon[74357]: pgmap v439: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:38:46 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:38:46 compute-0 sudo[141585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyjaxnrsfrehtxqwxruamjpqrqprbfgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503126.4923291-476-170828005270513/AnsiballZ_stat.py'
Jan 27 08:38:46 compute-0 sudo[141585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:47 compute-0 python3.9[141587]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:38:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:47.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:47 compute-0 sudo[141585]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:47.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:38:47 compute-0 sudo[141711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqcjxumcfpvdagoxglmnwacpbiggpqix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503126.4923291-476-170828005270513/AnsiballZ_copy.py'
Jan 27 08:38:47 compute-0 sudo[141711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:47 compute-0 python3.9[141713]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503126.4923291-476-170828005270513/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:47 compute-0 sudo[141711]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:48 compute-0 sudo[141863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeaifwzmojutwivhgxpubkgvhapfsrfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503127.907062-521-275874695378188/AnsiballZ_stat.py'
Jan 27 08:38:48 compute-0 sudo[141863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:48 compute-0 python3.9[141865]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:38:48 compute-0 sudo[141863]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:48 compute-0 sudo[141988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfhntcuvglupjqhsrsiusksztqczxiuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503127.907062-521-275874695378188/AnsiballZ_copy.py'
Jan 27 08:38:48 compute-0 sudo[141988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:48 compute-0 ceph-mon[74357]: pgmap v440: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:49 compute-0 python3.9[141990]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503127.907062-521-275874695378188/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:49.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:49 compute-0 sudo[141988]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:49.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:49 compute-0 sudo[142141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isebgwwogzmqcrptcgspjsokltdxeaxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503129.3879735-566-148003870036942/AnsiballZ_stat.py'
Jan 27 08:38:49 compute-0 sudo[142141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:49 compute-0 python3.9[142143]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:38:49 compute-0 sudo[142141]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:50 compute-0 sudo[142266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcuyizbsmtmibsvnvogfdregprhdmdev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503129.3879735-566-148003870036942/AnsiballZ_copy.py'
Jan 27 08:38:50 compute-0 sudo[142266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:50 compute-0 python3.9[142268]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503129.3879735-566-148003870036942/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:50 compute-0 sudo[142266]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:51 compute-0 ceph-mon[74357]: pgmap v441: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:51.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:51.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:51 compute-0 sudo[142419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ursrteclaaqdezhqufifhkclpzlhifbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503130.8188694-611-111490117692795/AnsiballZ_stat.py'
Jan 27 08:38:51 compute-0 sudo[142419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:51 compute-0 python3.9[142421]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:38:51 compute-0 sudo[142419]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:51 compute-0 sudo[142544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbrqxydrfbbimiyjrmpcublgzjmqsrrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503130.8188694-611-111490117692795/AnsiballZ_copy.py'
Jan 27 08:38:51 compute-0 sudo[142544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:52 compute-0 python3.9[142546]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503130.8188694-611-111490117692795/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:52 compute-0 sudo[142544]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:38:52 compute-0 ceph-mon[74357]: pgmap v442: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:52 compute-0 sudo[142696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moqggifobwqlqwbixvsqisoacchxxymf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503132.5840552-656-184896977566427/AnsiballZ_file.py'
Jan 27 08:38:52 compute-0 sudo[142696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:53.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:53 compute-0 python3.9[142698]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:53 compute-0 sudo[142696]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:53.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:53 compute-0 sudo[142849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djstoxxrybkmxaqdcrrxleakehrbbuop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503133.2573137-680-202765574823135/AnsiballZ_command.py'
Jan 27 08:38:53 compute-0 sudo[142849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:53 compute-0 python3.9[142851]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:38:53 compute-0 sudo[142849]: pam_unix(sudo:session): session closed for user root
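The command above concatenates the five edpm-*.nft fragments in load order (chains, flushes, rules, update-jumps, jumps) and pipes them through "nft -c -f -", which parses and checks the combined ruleset without committing anything. A minimal Python equivalent of that dry-run, with the file list and order copied from the logged command:

    # Sketch: replicate the logged "cat ... | nft -c -f -" check (-c = check only).
    import subprocess

    fragments = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    ruleset = "".join(open(f).read() for f in fragments)
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, text=True, check=True)
    print("ruleset parses cleanly")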
Jan 27 08:38:54 compute-0 ceph-mon[74357]: pgmap v443: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:54 compute-0 sudo[143004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwdrwvttopzwaykotxxverltguojfzhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503134.0271397-704-142795475098259/AnsiballZ_blockinfile.py'
Jan 27 08:38:54 compute-0 sudo[143004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:54 compute-0 python3.9[143006]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:38:54 compute-0 sudo[143004]: pam_unix(sudo:session): session closed for user root
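Given the blockinfile parameters logged above (marker "# {mark} ANSIBLE MANAGED BLOCK" with marker_begin=BEGIN and marker_end=END, plus the four include lines), the managed section written into /etc/sysconfig/nftables.conf should come out as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

The validate=nft -c -f %s argument means the assembled file is syntax-checked as a whole before it replaces the original, so a bad fragment cannot leave the boot-time config broken.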
Jan 27 08:38:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:55.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:55.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:55 compute-0 sudo[143157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pawzsymumvcsfcagexfcnouyooatsepl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503135.3329868-731-167917265482051/AnsiballZ_command.py'
Jan 27 08:38:55 compute-0 sudo[143157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:55 compute-0 python3.9[143159]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:38:55 compute-0 sudo[143157]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:56 compute-0 sudo[143238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:56 compute-0 sudo[143238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:56 compute-0 sudo[143238]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:56 compute-0 sudo[143285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:38:56 compute-0 sudo[143285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:38:56 compute-0 sudo[143285]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:56 compute-0 sudo[143360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sttthvfntdbywhdyaliyyyxjjummvrln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503136.0752382-755-86559373130071/AnsiballZ_stat.py'
Jan 27 08:38:56 compute-0 sudo[143360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:56 compute-0 python3.9[143362]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:38:56 compute-0 ceph-mon[74357]: pgmap v444: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:56 compute-0 sudo[143360]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:57.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:57 compute-0 sudo[143515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnsctdaeshnqszolbvbusqfonubyapwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503136.7877414-779-148483755634437/AnsiballZ_command.py'
Jan 27 08:38:57 compute-0 sudo[143515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:57.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:38:57 compute-0 python3.9[143517]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:38:57 compute-0 sudo[143515]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:57 compute-0 sudo[143670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxzpsqywjyuggwdyjrtwgtpuzczmueiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503137.5856342-803-257355507594094/AnsiballZ_file.py'
Jan 27 08:38:57 compute-0 sudo[143670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:38:58 compute-0 python3.9[143672]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
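The three operations on /etc/nftables/edpm-rules.nft.changed frame the apply step as a change marker: the file is touched when the rules are rewritten (08:38:53), stat'ed to decide whether a reload is needed (08:38:56), and removed here once "cat edpm-flushes.nft edpm-rules.nft edpm-update-jumps.nft | nft -f -" has committed the new rules (08:38:57; the chains file was already loaded separately at 08:38:55). A sketch of that idempotency pattern, assuming the same paths:

    # Sketch: apply nft fragments only when the change marker exists,
    # then clear the marker so the next run is a no-op.
    import os
    import subprocess

    MARKER = "/etc/nftables/edpm-rules.nft.changed"
    APPLY = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    if os.path.exists(MARKER):
        ruleset = "".join(open(f).read() for f in APPLY)
        subprocess.run(["nft", "-f", "-"], input=ruleset, text=True, check=True)
        os.remove(MARKER)  # success: next run skips the reload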
Jan 27 08:38:58 compute-0 sudo[143670]: pam_unix(sudo:session): session closed for user root
Jan 27 08:38:58 compute-0 ceph-mon[74357]: pgmap v445: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:38:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:38:59.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:38:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:38:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:38:59.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:38:59 compute-0 python3.9[143823]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:39:00 compute-0 ceph-mon[74357]: pgmap v446: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:00 compute-0 sudo[143974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqbyjsjkfuyloyedtemtkwafcftpwsxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503140.4056816-923-260370275210725/AnsiballZ_command.py'
Jan 27 08:39:00 compute-0 sudo[143974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:00 compute-0 python3.9[143976]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:39:00 compute-0 ovs-vsctl[143977]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 27 08:39:00 compute-0 sudo[143974]: pam_unix(sudo:session): session closed for user root
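The ovs-vsctl call above seeds the Open_vSwitch table's external_ids with everything ovn-controller reads at startup: the integration bridge (br-int), the physnet mapping datacentre:br-ex, the geneve tunnel endpoint 172.19.0.101, and the southbound DB at ssl:ovsdbserver-sb.openstack.svc:6642. A read-back sketch using the matching "get" command, with keys copied from the logged invocation:

    # Sketch: read back a few of the external_ids set above
    # (requires a local ovsdb, like the task itself).
    import subprocess

    for key in ("ovn-bridge", "ovn-encap-ip", "ovn-remote"):
        val = subprocess.run(
            ["ovs-vsctl", "get", "Open_vSwitch", ".", f"external_ids:{key}"],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        print(key, "=", val)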
Jan 27 08:39:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:01.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:01.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:01 compute-0 sudo[144128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkxevetailgljzthuyyyepohhjyejmpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503141.2542698-950-151864931139347/AnsiballZ_command.py'
Jan 27 08:39:01 compute-0 sudo[144128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:01 compute-0 python3.9[144130]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:39:01 compute-0 sudo[144128]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:39:02 compute-0 sudo[144283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvmzltxkoxpimlsdfvwvtvxhhydonxgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503142.0542538-974-19672266097674/AnsiballZ_command.py'
Jan 27 08:39:02 compute-0 sudo[144283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:02 compute-0 python3.9[144285]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:39:02 compute-0 ceph-mon[74357]: pgmap v447: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:02 compute-0 ovs-vsctl[144286]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 27 08:39:02 compute-0 sudo[144283]: pam_unix(sudo:session): session closed for user root
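This pair of tasks is a guard-then-create: 'ovs-vsctl show | grep -q "Manager"' at 08:39:01 checks whether a Manager row already exists, and only then is one created with target ptcp:6640:127.0.0.1, exposing the local OVSDB over plain TCP on loopback. (The Ansible invocation line shows the target masked to "ptcp:********@manager", apparently because the value trips a password-like user:secret@host heuristic in the module-invocation log; the ovs-vsctl INFO line records the real command.) A sketch of the same guard, with arguments mirrored from the INFO line:

    # Sketch: recreate the "is a Manager configured?" guard from the log.
    import subprocess

    show = subprocess.run(
        ["ovs-vsctl", "show"], check=True, capture_output=True, text=True
    ).stdout
    if "Manager" not in show:
        subprocess.run(
            ["ovs-vsctl", "--timeout=5", "--id=@manager", "--",
             "create", "Manager", 'target="ptcp:6640:127.0.0.1"',
             "--", "add", "Open_vSwitch", ".", "manager_options", "@manager"],
            check=True,
        )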
Jan 27 08:39:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:39:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:03.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:39:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:39:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:03.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:39:03 compute-0 python3.9[144437]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:39:04 compute-0 sudo[144589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paorcoxjxfkiqefaciqnogtnezovocnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503143.79268-1025-230099237654961/AnsiballZ_file.py'
Jan 27 08:39:04 compute-0 sudo[144589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:04 compute-0 python3.9[144591]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:39:04 compute-0 sudo[144589]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:04 compute-0 ceph-mon[74357]: pgmap v448: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:04 compute-0 sudo[144741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qypacpfcyprhqlxujrkvjbzwcbnxuyua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503144.5346804-1049-277459546791719/AnsiballZ_stat.py'
Jan 27 08:39:04 compute-0 sudo[144741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:39:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:05.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:39:05 compute-0 python3.9[144743]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:39:05 compute-0 sudo[144741]: pam_unix(sudo:session): session closed for user root
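The ansible.legacy.stat calls in this stretch all pass checksum_algorithm=sha1; the follow-up file/copy task compares that digest against the source to decide whether anything needs rewriting. The same digest-over-file computation, as a short sketch:

    import hashlib

    def sha1_of(path, chunk=65536):
        # Stream the file in chunks so large targets are not read into
        # memory at once, then return the hex digest the tasks compare.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    print(sha1_of("/var/local/libexec/edpm-container-shutdown"))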
Jan 27 08:39:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:05.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:05 compute-0 sudo[144820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqdxkqkgufxergbxrtfglxibfkluygmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503144.5346804-1049-277459546791719/AnsiballZ_file.py'
Jan 27 08:39:05 compute-0 sudo[144820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:05 compute-0 python3.9[144822]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:39:05 compute-0 sudo[144820]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:05 compute-0 sudo[144972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbongovtyhoxmqhkdgbsziirpksvssyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503145.6669765-1049-120646936543978/AnsiballZ_stat.py'
Jan 27 08:39:05 compute-0 sudo[144972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:06 compute-0 python3.9[144974]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:39:06 compute-0 sudo[144972]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:06 compute-0 sudo[145050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpwwlkdnktswueqelbtvrbiuufwfbpfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503145.6669765-1049-120646936543978/AnsiballZ_file.py'
Jan 27 08:39:06 compute-0 sudo[145050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:06 compute-0 ceph-mon[74357]: pgmap v449: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:06 compute-0 python3.9[145052]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:39:06 compute-0 sudo[145050]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:07.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:07 compute-0 sudo[145203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqxkqlhskketxmjpbuhyesqfwdfqgpjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503146.8292081-1118-56008296645136/AnsiballZ_file.py'
Jan 27 08:39:07 compute-0 sudo[145203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:07.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:39:07 compute-0 python3.9[145205]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:39:07 compute-0 sudo[145203]: pam_unix(sudo:session): session closed for user root
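One detail in the file invocation above: mode=420 is the decimal form of 0o644, not a typo. An unquoted YAML literal like 0644 is parsed as octal and reaches the module as the integer 420, so the resulting permissions are still rw-r--r--:

    # 420 decimal and 0o644 octal are the same permission bits.
    assert 420 == 0o644
    print(oct(420))  # 0o644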
Jan 27 08:39:07 compute-0 sudo[145355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdtnkkcuqlpgilodmrqqxwyjnljuqtpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503147.5604782-1142-118349989482619/AnsiballZ_stat.py'
Jan 27 08:39:07 compute-0 sudo[145355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:07 compute-0 python3.9[145357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:39:08 compute-0 sudo[145355]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:08 compute-0 sudo[145433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxlnmdxagvqghidkpemjsogcyhcdjwee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503147.5604782-1142-118349989482619/AnsiballZ_file.py'
Jan 27 08:39:08 compute-0 sudo[145433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:08 compute-0 python3.9[145435]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:39:08 compute-0 sudo[145433]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:08 compute-0 ceph-mon[74357]: pgmap v450: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:08 compute-0 sudo[145585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwqulsgejwqarilmrcspcrpoenzaqqve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503148.6885216-1178-25449797214735/AnsiballZ_stat.py'
Jan 27 08:39:08 compute-0 sudo[145585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:39:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:09.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:39:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:09.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:09 compute-0 python3.9[145587]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:39:09 compute-0 sudo[145585]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:09 compute-0 sudo[145664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ennvgafdtfxnsbnnlsxecmkyidgvxudu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503148.6885216-1178-25449797214735/AnsiballZ_file.py'
Jan 27 08:39:09 compute-0 sudo[145664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:09 compute-0 python3.9[145666]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:39:09 compute-0 sudo[145664]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:10 compute-0 sudo[145816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmoelymsyxmkqtywxtfbvozhzdmecqul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503149.9038327-1214-170649491470545/AnsiballZ_systemd.py'
Jan 27 08:39:10 compute-0 sudo[145816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:10 compute-0 python3.9[145818]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:39:10 compute-0 systemd[1]: Reloading.
Jan 27 08:39:10 compute-0 systemd-rc-local-generator[145843]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:39:10 compute-0 systemd-sysv-generator[145847]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:39:10 compute-0 ceph-mon[74357]: pgmap v451: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:10 compute-0 sudo[145816]: pam_unix(sudo:session): session closed for user root
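The ansible.builtin.systemd invocation above (daemon_reload=True, enabled=True, state=started) is what produces the "Reloading." line and the generator warnings that follow. The equivalent manual sequence, sketched with subprocess:

    import subprocess

    # Reload unit files, then enable and start the new service, matching
    # the module parameters logged above.
    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "edpm-container-shutdown"],
                ["systemctl", "start", "edpm-container-shutdown"]):
        subprocess.run(cmd, check=True)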
Jan 27 08:39:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:11.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:11.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:11 compute-0 sudo[146007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfpkgzsxetvfxdmdrgyihlnjspmwulxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503151.0913386-1238-194159385997130/AnsiballZ_stat.py'
Jan 27 08:39:11 compute-0 sudo[146007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:11 compute-0 python3.9[146009]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:39:11 compute-0 sudo[146007]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:11 compute-0 sudo[146085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gweywuiyujignsftlbqoduiksvkprhwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503151.0913386-1238-194159385997130/AnsiballZ_file.py'
Jan 27 08:39:11 compute-0 sudo[146085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:11 compute-0 python3.9[146087]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:39:12 compute-0 sudo[146085]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:39:12 compute-0 sudo[146237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntmboszzcarfpfpsmytldyhnzrbkylxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503152.276997-1274-96742888255714/AnsiballZ_stat.py'
Jan 27 08:39:12 compute-0 sudo[146237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:12 compute-0 python3.9[146239]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:39:12 compute-0 sudo[146237]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:12 compute-0 ceph-mon[74357]: pgmap v452: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:13.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:13.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:13 compute-0 sudo[146316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhxczkyiqqgwhmpysssvkbhdivuavvmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503152.276997-1274-96742888255714/AnsiballZ_file.py'
Jan 27 08:39:13 compute-0 sudo[146316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:13 compute-0 python3.9[146318]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:39:13 compute-0 sudo[146316]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:13 compute-0 sudo[146468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xksmikmnusgwevaoejojytdegfazatlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503153.6719825-1310-7402632545639/AnsiballZ_systemd.py'
Jan 27 08:39:13 compute-0 sudo[146468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:14 compute-0 python3.9[146470]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:39:14 compute-0 systemd[1]: Reloading.
Jan 27 08:39:14 compute-0 systemd-rc-local-generator[146496]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:39:14 compute-0 systemd-sysv-generator[146500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:39:14 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 08:39:14 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 08:39:14 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 08:39:14 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 08:39:14 compute-0 sudo[146468]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:14 compute-0 ceph-mon[74357]: pgmap v453: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:39:14
Jan 27 08:39:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:39:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:39:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes']
Jan 27 08:39:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:39:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:39:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:15.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:39:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:39:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:15.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:15 compute-0 sudo[146662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ropwkonogmqyufnsskfnkuzdtdzosfos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503154.955219-1340-47826697450699/AnsiballZ_file.py'
Jan 27 08:39:15 compute-0 sudo[146662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:15 compute-0 python3.9[146664]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:39:15 compute-0 sudo[146662]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:15 compute-0 sudo[146814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbniueqaoyyhmwpeggvidghfjyfvrtuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503155.6551898-1364-198416742104872/AnsiballZ_stat.py'
Jan 27 08:39:15 compute-0 sudo[146814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:16 compute-0 python3.9[146816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:39:16 compute-0 sudo[146814]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:16 compute-0 sudo[146861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:16 compute-0 sudo[146861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:16 compute-0 sudo[146861]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:16 compute-0 sudo[146907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:16 compute-0 sudo[146907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:16 compute-0 sudo[146907]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:16 compute-0 sudo[146987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hugklagwvwpinqkzbdawkcfemtpnginq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503155.6551898-1364-198416742104872/AnsiballZ_copy.py'
Jan 27 08:39:16 compute-0 sudo[146987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:16 compute-0 python3.9[146989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503155.6551898-1364-198416742104872/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:39:16 compute-0 sudo[146987]: pam_unix(sudo:session): session closed for user root
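Besides ownership and mode=0700, the copy above sets setype=container_file_t, the SELinux type that lets the healthcheck script be read from inside containers. Applying the same label by hand would look like this (chcon assumed available):

    import subprocess

    # Set the SELinux type the module requested on the deployed script.
    subprocess.run(
        ["chcon", "-t", "container_file_t",
         "/var/lib/openstack/healthchecks/ovn_controller/healthcheck"],
        check=True,
    )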
Jan 27 08:39:16 compute-0 ceph-mon[74357]: pgmap v454: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:17.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:39:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:17.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:17 compute-0 sudo[147140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwubueyinixeclupeugpddgwixwipsiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503157.3056343-1415-231489429283371/AnsiballZ_file.py'
Jan 27 08:39:17 compute-0 sudo[147140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:17 compute-0 python3.9[147142]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:39:17 compute-0 sudo[147140]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:18 compute-0 sudo[147292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdagizxshyqhlavajvrprcefklxrapto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503158.112564-1439-281353089072099/AnsiballZ_file.py'
Jan 27 08:39:18 compute-0 sudo[147292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:18 compute-0 python3.9[147294]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:39:18 compute-0 sudo[147292]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:18 compute-0 ceph-mon[74357]: pgmap v455: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:39:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:19.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:39:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:19.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:19 compute-0 ceph-mon[74357]: pgmap v456: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:20 compute-0 sudo[147445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgfvdokfvlhnpumpmpplbihgzlexxxiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503160.3895528-1463-182203158160394/AnsiballZ_stat.py'
Jan 27 08:39:20 compute-0 sudo[147445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:20 compute-0 python3.9[147447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:39:20 compute-0 sudo[147445]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:21.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:21.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:21 compute-0 sudo[147569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nelcluqsvxrsqvrssjvzsrjwdaxgthmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503160.3895528-1463-182203158160394/AnsiballZ_copy.py'
Jan 27 08:39:21 compute-0 sudo[147569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:21 compute-0 python3.9[147571]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503160.3895528-1463-182203158160394/.source.json _original_basename=.qumr67r4 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:39:21 compute-0 sudo[147569]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:22 compute-0 ceph-mon[74357]: pgmap v457: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:22 compute-0 python3.9[147721]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:39:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:39:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:23.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:23.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:24 compute-0 ceph-mon[74357]: pgmap v458: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
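The pg_autoscaler lines above all fit pg_target = capacity_ratio * bias * K with K = 300, which would be consistent with three OSDs at the default mon_target_pg_per_osd of 100; K itself is not printed, so that factor is an inference from the logged numbers. A quick arithmetic check against two of the lines:

    # capacity_ratio * bias * 300 reproduces the logged pg targets.
    print(2.0538165363856318e-05 * 1.0 * 300)  # ~0.0061614496 -> quantized to 1
    print(1.4540294062907128e-06 * 4.0 * 300)  # ~0.0017448353 -> quantized to 16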
Jan 27 08:39:24 compute-0 sudo[148144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kineqfjfufmpuzkzkxpizfbtbrmctatm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503164.0502641-1583-92372680415815/AnsiballZ_container_config_data.py'
Jan 27 08:39:24 compute-0 sudo[148144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:24 compute-0 python3.9[148146]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 27 08:39:24 compute-0 sudo[148144]: pam_unix(sudo:session): session closed for user root
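container_config_data is an EDPM-side module; its logged parameters (config_path, config_pattern=*.json, config_overrides={}) suggest it gathers per-container JSON definitions and layers overrides on top. A rough sketch of that gathering step under those assumptions — a shallow merge, not the module's actual implementation:

    import glob, json, os

    def load_container_configs(config_path, pattern="*.json", overrides=None):
        # Collect every matching JSON definition, keyed by file basename,
        # then apply caller overrides last (shallow merge; an assumption).
        configs = {}
        for path in sorted(glob.glob(os.path.join(config_path, pattern))):
            name = os.path.splitext(os.path.basename(path))[0]
            with open(path) as f:
                configs[name] = json.load(f)
        for name, extra in (overrides or {}).items():
            configs.setdefault(name, {}).update(extra)
        return configs

    print(load_container_configs(
        "/var/lib/edpm-config/container-startup-config/ovn_controller"))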
Jan 27 08:39:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:25.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:25.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:26 compute-0 sudo[148297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyoouqowfyndxevgedesvmpwshrqvsyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503165.3897462-1616-213502319951003/AnsiballZ_container_config_hash.py'
Jan 27 08:39:26 compute-0 sudo[148297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:26 compute-0 ceph-mon[74357]: pgmap v459: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:26 compute-0 python3.9[148299]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 08:39:26 compute-0 sudo[148297]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:39:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:27.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:39:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:39:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:27.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:27 compute-0 sudo[148450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhglsndgqxspxmsyccpbascckipvjwor ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769503166.6658278-1646-88395850099054/AnsiballZ_edpm_container_manage.py'
Jan 27 08:39:27 compute-0 sudo[148450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:27 compute-0 python3[148452]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
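edpm_container_manage takes the JSON definition written earlier and drives podman to realize it (the edpm-start-podman-container helper deployed above points the same way), logging container stdout under the logged log_base_path. Once it completes, the result can be checked with the podman CLI; a small sketch (podman assumed on PATH):

    import subprocess

    # Exit status 0 means the container exists; then list it.
    subprocess.run(["podman", "container", "exists", "ovn_controller"],
                   check=True)
    out = subprocess.run(["podman", "ps", "--filter", "name=ovn_controller"],
                         capture_output=True, text=True)
    print(out.stdout)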
Jan 27 08:39:28 compute-0 ceph-mon[74357]: pgmap v460: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:29.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:29.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:30 compute-0 ceph-mon[74357]: pgmap v461: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:39:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5895 writes, 25K keys, 5895 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5895 writes, 971 syncs, 6.07 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5895 writes, 25K keys, 5895 commit groups, 1.0 writes per commit group, ingest: 19.01 MB, 0.03 MB/s
                                           Interval WAL: 5895 writes, 971 syncs, 6.07 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 27 08:39:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:39:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:31.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:39:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:31.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:39:32 compute-0 ceph-mon[74357]: pgmap v462: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:32 compute-0 podman[148466]: 2026-01-27 08:39:32.784018639 +0000 UTC m=+5.001379566 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 08:39:32 compute-0 podman[148586]: 2026-01-27 08:39:32.956651415 +0000 UTC m=+0.054388632 container create 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, tcib_managed=true)
Jan 27 08:39:32 compute-0 podman[148586]: 2026-01-27 08:39:32.926519173 +0000 UTC m=+0.024256390 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 08:39:32 compute-0 python3[148452]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 08:39:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:33.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:33 compute-0 sudo[148450]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:33.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:33 compute-0 sudo[148776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpmkhuwplbyzaiwbzidotsekjoguzgis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503173.3851008-1670-46165923880966/AnsiballZ_stat.py'
Jan 27 08:39:33 compute-0 sudo[148776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:33 compute-0 python3.9[148778]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:39:33 compute-0 sudo[148776]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:34 compute-0 sudo[148930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cghyuhiqffdnawcuatmbrjtvdlnmiijt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503174.2225785-1697-236582729639937/AnsiballZ_file.py'
Jan 27 08:39:34 compute-0 sudo[148930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:34 compute-0 ceph-mon[74357]: pgmap v463: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:34 compute-0 python3.9[148932]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:39:34 compute-0 sudo[148930]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:34 compute-0 sudo[149006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqymcshprirwmqzgasvemlvmlechaopb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503174.2225785-1697-236582729639937/AnsiballZ_stat.py'
Jan 27 08:39:34 compute-0 sudo[149006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:35.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:35 compute-0 python3.9[149008]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:39:35 compute-0 sudo[149006]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:35.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:35 compute-0 sudo[149158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygbwxrefwgzelcsksiolcmrhthuqztzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503175.2443428-1697-72304052814596/AnsiballZ_copy.py'
Jan 27 08:39:35 compute-0 sudo[149158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:35 compute-0 python3.9[149160]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769503175.2443428-1697-72304052814596/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:39:35 compute-0 sudo[149158]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:36 compute-0 sudo[149234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiqlsyxijeosrlpczilcqqskhwrmwfrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503175.2443428-1697-72304052814596/AnsiballZ_systemd.py'
Jan 27 08:39:36 compute-0 sudo[149234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:36 compute-0 ceph-mgr[74650]: [devicehealth INFO root] Check health
Jan 27 08:39:36 compute-0 sudo[149237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:36 compute-0 sudo[149237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:36 compute-0 sudo[149237]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:36 compute-0 python3.9[149236]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:39:36 compute-0 systemd[1]: Reloading.
Jan 27 08:39:36 compute-0 systemd-rc-local-generator[149311]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:39:36 compute-0 systemd-sysv-generator[149316]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:39:36 compute-0 sudo[149262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:36 compute-0 sudo[149262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:36 compute-0 sudo[149262]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:36 compute-0 sudo[149234]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:36 compute-0 ceph-mon[74357]: pgmap v464: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:37 compute-0 sudo[149395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idnpioyeqvddfgextsrbcgajkkuigtwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503175.2443428-1697-72304052814596/AnsiballZ_systemd.py'
Jan 27 08:39:37 compute-0 sudo[149395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:37.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:39:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:37.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:37 compute-0 python3.9[149398]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:39:37 compute-0 systemd[1]: Reloading.
Jan 27 08:39:37 compute-0 systemd-sysv-generator[149431]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:39:37 compute-0 systemd-rc-local-generator[149428]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:39:37 compute-0 systemd[1]: Starting ovn_controller container...
Jan 27 08:39:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48ba565cf55d3d541d226b3681b7faab28cf9b25e231dbe3d43b772629d7d51/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:38 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3.
Jan 27 08:39:38 compute-0 podman[149439]: 2026-01-27 08:39:38.110578601 +0000 UTC m=+0.131051509 container init 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 08:39:38 compute-0 ovn_controller[149455]: + sudo -E kolla_set_configs
Jan 27 08:39:38 compute-0 podman[149439]: 2026-01-27 08:39:38.137624298 +0000 UTC m=+0.158097196 container start 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 08:39:38 compute-0 edpm-start-podman-container[149439]: ovn_controller
Jan 27 08:39:38 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 27 08:39:38 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 27 08:39:38 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 27 08:39:38 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 27 08:39:38 compute-0 systemd[149487]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 27 08:39:38 compute-0 edpm-start-podman-container[149438]: Creating additional drop-in dependency for "ovn_controller" (4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3)
Jan 27 08:39:38 compute-0 podman[149462]: 2026-01-27 08:39:38.211659561 +0000 UTC m=+0.063552236 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 27 08:39:38 compute-0 systemd[1]: 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3-566a64cf890fb7e8.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 08:39:38 compute-0 systemd[1]: 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3-566a64cf890fb7e8.service: Failed with result 'exit-code'.
Jan 27 08:39:38 compute-0 systemd[1]: Reloading.
Jan 27 08:39:38 compute-0 systemd-rc-local-generator[149536]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:39:38 compute-0 systemd-sysv-generator[149539]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:39:38 compute-0 systemd[149487]: Queued start job for default target Main User Target.
Jan 27 08:39:38 compute-0 systemd[149487]: Created slice User Application Slice.
Jan 27 08:39:38 compute-0 systemd[149487]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 27 08:39:38 compute-0 systemd[149487]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 08:39:38 compute-0 systemd[149487]: Reached target Paths.
Jan 27 08:39:38 compute-0 systemd[149487]: Reached target Timers.
Jan 27 08:39:38 compute-0 systemd[149487]: Starting D-Bus User Message Bus Socket...
Jan 27 08:39:38 compute-0 systemd[149487]: Starting Create User's Volatile Files and Directories...
Jan 27 08:39:38 compute-0 systemd[149487]: Finished Create User's Volatile Files and Directories.
Jan 27 08:39:38 compute-0 systemd[149487]: Listening on D-Bus User Message Bus Socket.
Jan 27 08:39:38 compute-0 systemd[149487]: Reached target Sockets.
Jan 27 08:39:38 compute-0 systemd[149487]: Reached target Basic System.
Jan 27 08:39:38 compute-0 systemd[149487]: Reached target Main User Target.
Jan 27 08:39:38 compute-0 systemd[149487]: Startup finished in 161ms.
Jan 27 08:39:38 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 27 08:39:38 compute-0 systemd[1]: Started ovn_controller container.
Jan 27 08:39:38 compute-0 systemd[1]: Started Session c1 of User root.
Jan 27 08:39:38 compute-0 sudo[149395]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:38 compute-0 ovn_controller[149455]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 08:39:38 compute-0 ovn_controller[149455]: INFO:__main__:Validating config file
Jan 27 08:39:38 compute-0 ovn_controller[149455]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 08:39:38 compute-0 ovn_controller[149455]: INFO:__main__:Writing out command to execute
Jan 27 08:39:38 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 27 08:39:38 compute-0 ovn_controller[149455]: ++ cat /run_command
Jan 27 08:39:38 compute-0 ovn_controller[149455]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 27 08:39:38 compute-0 ovn_controller[149455]: + ARGS=
Jan 27 08:39:38 compute-0 ovn_controller[149455]: + sudo kolla_copy_cacerts
Jan 27 08:39:38 compute-0 systemd[1]: Started Session c2 of User root.
Jan 27 08:39:38 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 27 08:39:38 compute-0 ovn_controller[149455]: + [[ ! -n '' ]]
Jan 27 08:39:38 compute-0 ovn_controller[149455]: + . kolla_extend_start
Jan 27 08:39:38 compute-0 ovn_controller[149455]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 27 08:39:38 compute-0 ovn_controller[149455]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 27 08:39:38 compute-0 ovn_controller[149455]: + umask 0022
Jan 27 08:39:38 compute-0 ovn_controller[149455]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 27 08:39:38 compute-0 NetworkManager[48994]: <info>  [1769503178.6532] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 27 08:39:38 compute-0 NetworkManager[48994]: <info>  [1769503178.6540] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:39:38 compute-0 NetworkManager[48994]: <warn>  [1769503178.6543] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:39:38 compute-0 NetworkManager[48994]: <info>  [1769503178.6552] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 27 08:39:38 compute-0 NetworkManager[48994]: <info>  [1769503178.6558] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 27 08:39:38 compute-0 NetworkManager[48994]: <info>  [1769503178.6562] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 27 08:39:38 compute-0 kernel: br-int: entered promiscuous mode
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 08:39:38 compute-0 ovn_controller[149455]: 2026-01-27T08:39:38Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 08:39:38 compute-0 NetworkManager[48994]: <info>  [1769503178.6878] manager: (ovn-d032c5-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 27 08:39:38 compute-0 NetworkManager[48994]: <info>  [1769503178.6884] manager: (ovn-96b682-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 27 08:39:38 compute-0 NetworkManager[48994]: <info>  [1769503178.6890] manager: (ovn-a901be-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 27 08:39:38 compute-0 systemd-udevd[149583]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:39:38 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 27 08:39:38 compute-0 systemd-udevd[149585]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:39:38 compute-0 NetworkManager[48994]: <info>  [1769503178.7026] device (genev_sys_6081): carrier: link connected
Jan 27 08:39:38 compute-0 NetworkManager[48994]: <info>  [1769503178.7028] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Jan 27 08:39:38 compute-0 ceph-mon[74357]: pgmap v465: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:39:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:39.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:39:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:39.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:39 compute-0 python3.9[149714]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 08:39:39 compute-0 ceph-mon[74357]: pgmap v466: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:40 compute-0 sudo[149864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdqtgmwcpmizothxswmofakeglgusrmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503180.1513746-1832-191585028290182/AnsiballZ_stat.py'
Jan 27 08:39:40 compute-0 sudo[149864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:40 compute-0 python3.9[149866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:39:40 compute-0 sudo[149864]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:41 compute-0 sudo[149987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wixgwsohnbhohwikktlxgmhaveyzdlrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503180.1513746-1832-191585028290182/AnsiballZ_copy.py'
Jan 27 08:39:41 compute-0 sudo[149987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:41.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:41.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:41 compute-0 python3.9[149990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503180.1513746-1832-191585028290182/.source.yaml _original_basename=.8r1cyta3 follow=False checksum=583db66562417d9b5b38ec8176f4601e0e36983a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:39:41 compute-0 sudo[149987]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:41 compute-0 sudo[150140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkbyldmgcueidyaiabzxeupzkthdptss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503181.4323204-1877-280851015346630/AnsiballZ_command.py'
Jan 27 08:39:41 compute-0 sudo[150140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:41 compute-0 python3.9[150142]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:39:41 compute-0 ovs-vsctl[150143]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 27 08:39:41 compute-0 sudo[150140]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:42 compute-0 ceph-mon[74357]: pgmap v467: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:39:42 compute-0 sudo[150293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohxsxedfrixikpbwccilpgdborfjmsxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503182.212412-1901-230667853182380/AnsiballZ_command.py'
Jan 27 08:39:42 compute-0 sudo[150293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:42 compute-0 python3.9[150295]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:39:42 compute-0 ovs-vsctl[150297]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 27 08:39:42 compute-0 sudo[150293]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:43.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:43.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:43 compute-0 sudo[150449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrdzqjavdecezytqnobibcjveakvnuwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503183.2174563-1943-264193552985669/AnsiballZ_command.py'
Jan 27 08:39:43 compute-0 sudo[150449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:43 compute-0 python3.9[150451]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:39:43 compute-0 ovs-vsctl[150452]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 27 08:39:43 compute-0 sudo[150449]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:44 compute-0 sshd-session[137660]: Connection closed by 192.168.122.30 port 45510
Jan 27 08:39:44 compute-0 sshd-session[137657]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:39:44 compute-0 systemd-logind[799]: Session 46 logged out. Waiting for processes to exit.
Jan 27 08:39:44 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Jan 27 08:39:44 compute-0 systemd[1]: session-46.scope: Consumed 58.106s CPU time.
Jan 27 08:39:44 compute-0 systemd-logind[799]: Removed session 46.
Jan 27 08:39:44 compute-0 ceph-mon[74357]: pgmap v468: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:39:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:39:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:39:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:39:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:39:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:39:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:45.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:45.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:46 compute-0 ceph-mon[74357]: pgmap v469: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:46 compute-0 sudo[150478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:46 compute-0 sudo[150478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:46 compute-0 sudo[150478]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:46 compute-0 sudo[150503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:39:46 compute-0 sudo[150503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:46 compute-0 sudo[150503]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:47 compute-0 sudo[150528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:47 compute-0 sudo[150528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:47 compute-0 sudo[150528]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:47 compute-0 sudo[150553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:39:47 compute-0 sudo[150553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:47.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:47.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:39:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:39:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:39:47 compute-0 sudo[150553]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 27 08:39:47 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 27 08:39:47 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:39:47 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:39:47 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 27 08:39:47 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:47 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 27 08:39:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 27 08:39:48 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 27 08:39:48 compute-0 systemd[149487]: Activating special unit Exit the Session...
Jan 27 08:39:48 compute-0 systemd[149487]: Stopped target Main User Target.
Jan 27 08:39:48 compute-0 systemd[149487]: Stopped target Basic System.
Jan 27 08:39:48 compute-0 systemd[149487]: Stopped target Paths.
Jan 27 08:39:48 compute-0 systemd[149487]: Stopped target Sockets.
Jan 27 08:39:48 compute-0 systemd[149487]: Stopped target Timers.
Jan 27 08:39:48 compute-0 systemd[149487]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 27 08:39:48 compute-0 systemd[149487]: Closed D-Bus User Message Bus Socket.
Jan 27 08:39:48 compute-0 systemd[149487]: Stopped Create User's Volatile Files and Directories.
Jan 27 08:39:48 compute-0 systemd[149487]: Removed slice User Application Slice.
Jan 27 08:39:48 compute-0 systemd[149487]: Reached target Shutdown.
Jan 27 08:39:48 compute-0 systemd[149487]: Finished Exit the Session.
Jan 27 08:39:48 compute-0 systemd[149487]: Reached target Exit the Session.
Jan 27 08:39:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 27 08:39:48 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 27 08:39:48 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 27 08:39:48 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 27 08:39:48 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 27 08:39:48 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 27 08:39:48 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 27 08:39:48 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 27 08:39:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:39:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:39:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:39:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:39:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:39:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:48 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 986716cf-bd0f-494f-a47b-4384b604003d does not exist
Jan 27 08:39:48 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d7d67978-476e-4432-b10e-2b819b1f4fe0 does not exist
Jan 27 08:39:48 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 7c42d537-ba96-4a2f-b9bc-f5ad1404032d does not exist
Jan 27 08:39:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:39:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:39:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:39:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:39:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:39:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:39:48 compute-0 ceph-mon[74357]: pgmap v470: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 27 08:39:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 27 08:39:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:39:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:39:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:39:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:39:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:39:48 compute-0 sudo[150613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:48 compute-0 sudo[150613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:48 compute-0 sudo[150613]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:49 compute-0 sudo[150638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:39:49 compute-0 sudo[150638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:49 compute-0 sudo[150638]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:49 compute-0 sudo[150663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:49 compute-0 sudo[150663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:49 compute-0 sudo[150663]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:39:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:49.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:39:49 compute-0 sudo[150689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:39:49 compute-0 sudo[150689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:49.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:49 compute-0 sshd-session[150725]: Accepted publickey for zuul from 192.168.122.30 port 36304 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:39:49 compute-0 systemd-logind[799]: New session 48 of user zuul.
Jan 27 08:39:49 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 27 08:39:49 compute-0 sshd-session[150725]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:39:49 compute-0 podman[150757]: 2026-01-27 08:39:49.522762448 +0000 UTC m=+0.050977749 container create cc8b5ae0265ef4d3e48ff0f62aa8e77b2ad61722ab87e2d77984556706d6d717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:39:49 compute-0 systemd[1]: Started libpod-conmon-cc8b5ae0265ef4d3e48ff0f62aa8e77b2ad61722ab87e2d77984556706d6d717.scope.
Jan 27 08:39:49 compute-0 podman[150757]: 2026-01-27 08:39:49.495713491 +0000 UTC m=+0.023928852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:39:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:39:49 compute-0 podman[150757]: 2026-01-27 08:39:49.614782248 +0000 UTC m=+0.142997539 container init cc8b5ae0265ef4d3e48ff0f62aa8e77b2ad61722ab87e2d77984556706d6d717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:39:49 compute-0 podman[150757]: 2026-01-27 08:39:49.625219376 +0000 UTC m=+0.153434647 container start cc8b5ae0265ef4d3e48ff0f62aa8e77b2ad61722ab87e2d77984556706d6d717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:39:49 compute-0 podman[150757]: 2026-01-27 08:39:49.628777894 +0000 UTC m=+0.156993165 container attach cc8b5ae0265ef4d3e48ff0f62aa8e77b2ad61722ab87e2d77984556706d6d717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:39:49 compute-0 funny_jemison[150797]: 167 167
Jan 27 08:39:49 compute-0 systemd[1]: libpod-cc8b5ae0265ef4d3e48ff0f62aa8e77b2ad61722ab87e2d77984556706d6d717.scope: Deactivated successfully.
Jan 27 08:39:49 compute-0 podman[150757]: 2026-01-27 08:39:49.634907164 +0000 UTC m=+0.163122435 container died cc8b5ae0265ef4d3e48ff0f62aa8e77b2ad61722ab87e2d77984556706d6d717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 27 08:39:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-af0f0d79ec09f2f8938ddb52a7e2c4e13aedcfad46d2cefb4f1480fe0f7fe653-merged.mount: Deactivated successfully.
Jan 27 08:39:49 compute-0 podman[150757]: 2026-01-27 08:39:49.676268975 +0000 UTC m=+0.204484246 container remove cc8b5ae0265ef4d3e48ff0f62aa8e77b2ad61722ab87e2d77984556706d6d717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:39:49 compute-0 systemd[1]: libpod-conmon-cc8b5ae0265ef4d3e48ff0f62aa8e77b2ad61722ab87e2d77984556706d6d717.scope: Deactivated successfully.
Jan 27 08:39:49 compute-0 podman[150850]: 2026-01-27 08:39:49.841997341 +0000 UTC m=+0.043168433 container create eb0ffa11fd78d20103c1e29db8041052bce26bbc88fd6f6bf9b79cd2c600df1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:39:49 compute-0 systemd[1]: Started libpod-conmon-eb0ffa11fd78d20103c1e29db8041052bce26bbc88fd6f6bf9b79cd2c600df1e.scope.
Jan 27 08:39:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:39:49 compute-0 podman[150850]: 2026-01-27 08:39:49.824348953 +0000 UTC m=+0.025520065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8252792330b9b36f95b9287aebe51d9a290fbf7d104351f2d3efadf835976fcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8252792330b9b36f95b9287aebe51d9a290fbf7d104351f2d3efadf835976fcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8252792330b9b36f95b9287aebe51d9a290fbf7d104351f2d3efadf835976fcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8252792330b9b36f95b9287aebe51d9a290fbf7d104351f2d3efadf835976fcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8252792330b9b36f95b9287aebe51d9a290fbf7d104351f2d3efadf835976fcc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:49 compute-0 podman[150850]: 2026-01-27 08:39:49.937028923 +0000 UTC m=+0.138200035 container init eb0ffa11fd78d20103c1e29db8041052bce26bbc88fd6f6bf9b79cd2c600df1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:39:49 compute-0 podman[150850]: 2026-01-27 08:39:49.951043931 +0000 UTC m=+0.152215033 container start eb0ffa11fd78d20103c1e29db8041052bce26bbc88fd6f6bf9b79cd2c600df1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:39:49 compute-0 podman[150850]: 2026-01-27 08:39:49.954443945 +0000 UTC m=+0.155615037 container attach eb0ffa11fd78d20103c1e29db8041052bce26bbc88fd6f6bf9b79cd2c600df1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 27 08:39:50 compute-0 python3.9[150968]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:39:50 compute-0 objective_curie[150866]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:39:50 compute-0 objective_curie[150866]: --> relative data size: 1.0
Jan 27 08:39:50 compute-0 objective_curie[150866]: --> All data devices are unavailable
Jan 27 08:39:50 compute-0 systemd[1]: libpod-eb0ffa11fd78d20103c1e29db8041052bce26bbc88fd6f6bf9b79cd2c600df1e.scope: Deactivated successfully.
Jan 27 08:39:50 compute-0 podman[150850]: 2026-01-27 08:39:50.781632089 +0000 UTC m=+0.982803221 container died eb0ffa11fd78d20103c1e29db8041052bce26bbc88fd6f6bf9b79cd2c600df1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 27 08:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8252792330b9b36f95b9287aebe51d9a290fbf7d104351f2d3efadf835976fcc-merged.mount: Deactivated successfully.
Jan 27 08:39:50 compute-0 podman[150850]: 2026-01-27 08:39:50.844204216 +0000 UTC m=+1.045375308 container remove eb0ffa11fd78d20103c1e29db8041052bce26bbc88fd6f6bf9b79cd2c600df1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 27 08:39:50 compute-0 systemd[1]: libpod-conmon-eb0ffa11fd78d20103c1e29db8041052bce26bbc88fd6f6bf9b79cd2c600df1e.scope: Deactivated successfully.
Jan 27 08:39:50 compute-0 sudo[150689]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:50 compute-0 ceph-mon[74357]: pgmap v471: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:50 compute-0 sudo[151022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:50 compute-0 sudo[151022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:50 compute-0 sudo[151022]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:51 compute-0 sudo[151047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:39:51 compute-0 sudo[151047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:51 compute-0 sudo[151047]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:39:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:51.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:39:51 compute-0 sudo[151073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:51 compute-0 sudo[151073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:51 compute-0 sudo[151073]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:51 compute-0 sudo[151121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:39:51 compute-0 sudo[151121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:51.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:51 compute-0 podman[151264]: 2026-01-27 08:39:51.552553441 +0000 UTC m=+0.054337821 container create cd6e95275633e07407403805aa60c60087a801fe854353a5f500a684d9c90521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:39:51 compute-0 systemd[1]: Started libpod-conmon-cd6e95275633e07407403805aa60c60087a801fe854353a5f500a684d9c90521.scope.
Jan 27 08:39:51 compute-0 sudo[151305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufudhwcqwtjgrbuwscqwglfcjuqwbxxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503191.112813-62-90783956500176/AnsiballZ_file.py'
Jan 27 08:39:51 compute-0 sudo[151305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:51 compute-0 podman[151264]: 2026-01-27 08:39:51.529929636 +0000 UTC m=+0.031713996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:39:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:39:51 compute-0 podman[151264]: 2026-01-27 08:39:51.648214761 +0000 UTC m=+0.149999101 container init cd6e95275633e07407403805aa60c60087a801fe854353a5f500a684d9c90521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:39:51 compute-0 podman[151264]: 2026-01-27 08:39:51.657281422 +0000 UTC m=+0.159065802 container start cd6e95275633e07407403805aa60c60087a801fe854353a5f500a684d9c90521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:39:51 compute-0 podman[151264]: 2026-01-27 08:39:51.661226361 +0000 UTC m=+0.163010701 container attach cd6e95275633e07407403805aa60c60087a801fe854353a5f500a684d9c90521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:39:51 compute-0 kind_hertz[151309]: 167 167
Jan 27 08:39:51 compute-0 systemd[1]: libpod-cd6e95275633e07407403805aa60c60087a801fe854353a5f500a684d9c90521.scope: Deactivated successfully.
Jan 27 08:39:51 compute-0 podman[151264]: 2026-01-27 08:39:51.666446325 +0000 UTC m=+0.168230675 container died cd6e95275633e07407403805aa60c60087a801fe854353a5f500a684d9c90521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:39:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2000cf6b848e74e2859e4729704e961cfdd77eaabdec93f01f427b7a7949dbc-merged.mount: Deactivated successfully.
Jan 27 08:39:51 compute-0 podman[151264]: 2026-01-27 08:39:51.700346081 +0000 UTC m=+0.202130431 container remove cd6e95275633e07407403805aa60c60087a801fe854353a5f500a684d9c90521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:39:51 compute-0 systemd[1]: libpod-conmon-cd6e95275633e07407403805aa60c60087a801fe854353a5f500a684d9c90521.scope: Deactivated successfully.
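The kind_hertz container above is created, started, and torn down within roughly 150 ms, and its only output is the pair `167 167`, i.e. the uid/gid of the ceph user baked into the image. That is the signature of a throwaway probe container used by cephadm-style tooling to discover ownership before writing host directories. A minimal sketch of such a probe, assuming podman on PATH; the `stat /var/lib/ceph` entrypoint is an assumption, since the journal does not record the container's command line:

```python
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

def probe_ceph_ids(image: str = IMAGE) -> tuple[int, int]:
    """Run a short-lived container that prints '<uid> <gid>' (e.g. '167 167')."""
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         image, "-c", "%u %g", "/var/lib/ceph"],  # probed path is a guess
        check=True, capture_output=True, text=True,
    ).stdout.split()
    return int(out[0]), int(out[1])
```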
Jan 27 08:39:51 compute-0 python3.9[151311]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:39:51 compute-0 sudo[151305]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:51 compute-0 podman[151335]: 2026-01-27 08:39:51.904526858 +0000 UTC m=+0.050740872 container create fed395f824bef7ee58c98979e06842d4ce281e3eb0b2771a04e59b475b9476b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:39:51 compute-0 systemd[1]: Started libpod-conmon-fed395f824bef7ee58c98979e06842d4ce281e3eb0b2771a04e59b475b9476b6.scope.
Jan 27 08:39:51 compute-0 podman[151335]: 2026-01-27 08:39:51.883577849 +0000 UTC m=+0.029791883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:39:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5106898f9f2eb3b3f58d36c9fd0b2b3ac96095f609b33862f0af66aae0bc75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5106898f9f2eb3b3f58d36c9fd0b2b3ac96095f609b33862f0af66aae0bc75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5106898f9f2eb3b3f58d36c9fd0b2b3ac96095f609b33862f0af66aae0bc75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5106898f9f2eb3b3f58d36c9fd0b2b3ac96095f609b33862f0af66aae0bc75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:52 compute-0 podman[151335]: 2026-01-27 08:39:52.011150511 +0000 UTC m=+0.157364625 container init fed395f824bef7ee58c98979e06842d4ce281e3eb0b2771a04e59b475b9476b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 08:39:52 compute-0 podman[151335]: 2026-01-27 08:39:52.024464629 +0000 UTC m=+0.170678633 container start fed395f824bef7ee58c98979e06842d4ce281e3eb0b2771a04e59b475b9476b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:39:52 compute-0 podman[151335]: 2026-01-27 08:39:52.027686227 +0000 UTC m=+0.173900261 container attach fed395f824bef7ee58c98979e06842d4ce281e3eb0b2771a04e59b475b9476b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Jan 27 08:39:52 compute-0 ceph-mon[74357]: pgmap v472: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:39:52 compute-0 sudo[151505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmkxbmlviuufnsnfulvncfsiesodotlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503192.009957-62-144020788334986/AnsiballZ_file.py'
Jan 27 08:39:52 compute-0 sudo[151505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:52 compute-0 python3.9[151507]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:39:52 compute-0 sudo[151505]: pam_unix(sudo:session): session closed for user root
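The `ansible-ansible.builtin.file Invoked with …` entries above and below are journald traces of Ansible `file` tasks executed through sudo/become. Functionally, the /var/lib/neutron task amounts to the following; this is a simplified stand-in, not the module's implementation, and `chcon` only relabels in place where the module's `setype` handling goes through the SELinux bindings:

```python
import grp
import os
import pwd
import subprocess

def ensure_dir(path: str, owner: str, group: str, mode: int, setype: str) -> None:
    """Create a directory with fixed ownership, mode and SELinux type."""
    os.makedirs(path, mode=mode, exist_ok=True)
    os.chown(path, pwd.getpwnam(owner).pw_uid, grp.getgrnam(group).gr_gid)
    os.chmod(path, mode)  # re-apply: makedirs' mode argument is filtered by umask
    subprocess.run(["chcon", "-t", setype, path], check=True)

ensure_dir("/var/lib/neutron", "zuul", "zuul", 0o755, "container_file_t")
```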
Jan 27 08:39:52 compute-0 hungry_colden[151377]: {
Jan 27 08:39:52 compute-0 hungry_colden[151377]:     "0": [
Jan 27 08:39:52 compute-0 hungry_colden[151377]:         {
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             "devices": [
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "/dev/loop3"
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             ],
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             "lv_name": "ceph_lv0",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             "lv_size": "7511998464",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             "name": "ceph_lv0",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             "tags": {
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "ceph.cluster_name": "ceph",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "ceph.crush_device_class": "",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "ceph.encrypted": "0",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "ceph.osd_id": "0",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "ceph.type": "block",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:                 "ceph.vdo": "0"
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             },
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             "type": "block",
Jan 27 08:39:52 compute-0 hungry_colden[151377]:             "vg_name": "ceph_vg0"
Jan 27 08:39:52 compute-0 hungry_colden[151377]:         }
Jan 27 08:39:52 compute-0 hungry_colden[151377]:     ]
Jan 27 08:39:52 compute-0 hungry_colden[151377]: }
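The hungry_colden container prints the JSON document above, keyed by OSD id with one entry per logical volume, which matches the shape of `ceph-volume lvm list --format json` output. Summarizing it is plain JSON handling; a sketch, assuming the block has been captured into a string:

```python
import json

def summarize_lvm_list(raw: str) -> list[dict]:
    """Map each OSD id to its LV path, backing devices and osd_fsid."""
    rows = []
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            rows.append({
                "osd_id": osd_id,                        # "0"
                "lv_path": lv["lv_path"],                # /dev/ceph_vg0/ceph_lv0
                "devices": lv["devices"],                # ["/dev/loop3"]
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],
            })
    return rows
```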
Jan 27 08:39:52 compute-0 systemd[1]: libpod-fed395f824bef7ee58c98979e06842d4ce281e3eb0b2771a04e59b475b9476b6.scope: Deactivated successfully.
Jan 27 08:39:52 compute-0 podman[151335]: 2026-01-27 08:39:52.814289562 +0000 UTC m=+0.960503626 container died fed395f824bef7ee58c98979e06842d4ce281e3eb0b2771a04e59b475b9476b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 27 08:39:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f5106898f9f2eb3b3f58d36c9fd0b2b3ac96095f609b33862f0af66aae0bc75-merged.mount: Deactivated successfully.
Jan 27 08:39:52 compute-0 podman[151335]: 2026-01-27 08:39:52.969527097 +0000 UTC m=+1.115741111 container remove fed395f824bef7ee58c98979e06842d4ce281e3eb0b2771a04e59b475b9476b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:39:52 compute-0 systemd[1]: libpod-conmon-fed395f824bef7ee58c98979e06842d4ce281e3eb0b2771a04e59b475b9476b6.scope: Deactivated successfully.
Jan 27 08:39:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:53 compute-0 sudo[151121]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:53 compute-0 sudo[151646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:53 compute-0 sudo[151646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:53 compute-0 sudo[151646]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:53.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
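The anonymous `HEAD / HTTP/1.0` requests recurring every ~2 s from 192.168.122.100 and 192.168.122.102 have the shape of load-balancer health checks against radosgw. A probe of the same form; the port is an assumption, as the journal does not show the listening endpoint:

```python
import http.client

def rgw_alive(host: str, port: int = 8080, timeout: float = 2.0) -> bool:
    """HEAD / against a radosgw endpoint; True on HTTP 200."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().status == 200
    except (OSError, http.client.HTTPException):
        return False
    finally:
        conn.close()
```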
Jan 27 08:39:53 compute-0 sudo[151697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htwyoofurzuigzerogtayilbmmhohylh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503192.7442195-62-127578703702633/AnsiballZ_file.py'
Jan 27 08:39:53 compute-0 sudo[151697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:53 compute-0 sudo[151700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:39:53 compute-0 sudo[151700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:53 compute-0 sudo[151700]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:53 compute-0 sudo[151726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:53.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:53 compute-0 sudo[151726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:53 compute-0 sudo[151726]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:53 compute-0 sudo[151751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:39:53 compute-0 sudo[151751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:53 compute-0 python3.9[151702]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:39:53 compute-0 sudo[151697]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:53 compute-0 podman[151892]: 2026-01-27 08:39:53.617821013 +0000 UTC m=+0.040846868 container create be088a50516ad4568ffbcce2167968dd851b7a4c0dcc94b62ad6f4d0b0d706e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:39:53 compute-0 systemd[1]: Started libpod-conmon-be088a50516ad4568ffbcce2167968dd851b7a4c0dcc94b62ad6f4d0b0d706e6.scope.
Jan 27 08:39:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:39:53 compute-0 podman[151892]: 2026-01-27 08:39:53.600594198 +0000 UTC m=+0.023620073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:39:53 compute-0 podman[151892]: 2026-01-27 08:39:53.699856978 +0000 UTC m=+0.122882853 container init be088a50516ad4568ffbcce2167968dd851b7a4c0dcc94b62ad6f4d0b0d706e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:39:53 compute-0 podman[151892]: 2026-01-27 08:39:53.706311306 +0000 UTC m=+0.129337161 container start be088a50516ad4568ffbcce2167968dd851b7a4c0dcc94b62ad6f4d0b0d706e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:39:53 compute-0 podman[151892]: 2026-01-27 08:39:53.710548424 +0000 UTC m=+0.133574299 container attach be088a50516ad4568ffbcce2167968dd851b7a4c0dcc94b62ad6f4d0b0d706e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:39:53 compute-0 gifted_ramanujan[151935]: 167 167
Jan 27 08:39:53 compute-0 systemd[1]: libpod-be088a50516ad4568ffbcce2167968dd851b7a4c0dcc94b62ad6f4d0b0d706e6.scope: Deactivated successfully.
Jan 27 08:39:53 compute-0 conmon[151935]: conmon be088a50516ad4568ffb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be088a50516ad4568ffbcce2167968dd851b7a4c0dcc94b62ad6f4d0b0d706e6.scope/container/memory.events
Jan 27 08:39:53 compute-0 podman[151892]: 2026-01-27 08:39:53.714870093 +0000 UTC m=+0.137895948 container died be088a50516ad4568ffbcce2167968dd851b7a4c0dcc94b62ad6f4d0b0d706e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 08:39:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aa533446274c56af06f14698f56834bc33be2c8d675b74b8f23470fc7c6a9c9-merged.mount: Deactivated successfully.
Jan 27 08:39:53 compute-0 podman[151892]: 2026-01-27 08:39:53.755419962 +0000 UTC m=+0.178445817 container remove be088a50516ad4568ffbcce2167968dd851b7a4c0dcc94b62ad6f4d0b0d706e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:39:53 compute-0 systemd[1]: libpod-conmon-be088a50516ad4568ffbcce2167968dd851b7a4c0dcc94b62ad6f4d0b0d706e6.scope: Deactivated successfully.
Jan 27 08:39:53 compute-0 sudo[152002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecguanatesegmlrphufrzstnfholpwxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503193.4582691-62-32485610116722/AnsiballZ_file.py'
Jan 27 08:39:53 compute-0 sudo[152002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:53 compute-0 podman[152010]: 2026-01-27 08:39:53.925156618 +0000 UTC m=+0.047548064 container create 58f18127550b0852e01ed50624211e595125587e9f6ab9a9f1889330ad566f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 08:39:53 compute-0 systemd[1]: Started libpod-conmon-58f18127550b0852e01ed50624211e595125587e9f6ab9a9f1889330ad566f33.scope.
Jan 27 08:39:53 compute-0 python3.9[152004]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:39:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:39:53 compute-0 podman[152010]: 2026-01-27 08:39:53.90350763 +0000 UTC m=+0.025899096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:39:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79146273aa310172a52ecd8c25a3504b41b603a0b5aeb21e136a069b3d254e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79146273aa310172a52ecd8c25a3504b41b603a0b5aeb21e136a069b3d254e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79146273aa310172a52ecd8c25a3504b41b603a0b5aeb21e136a069b3d254e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79146273aa310172a52ecd8c25a3504b41b603a0b5aeb21e136a069b3d254e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:54 compute-0 podman[152010]: 2026-01-27 08:39:54.011017078 +0000 UTC m=+0.133408544 container init 58f18127550b0852e01ed50624211e595125587e9f6ab9a9f1889330ad566f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 08:39:54 compute-0 sudo[152002]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:54 compute-0 podman[152010]: 2026-01-27 08:39:54.02159753 +0000 UTC m=+0.143988976 container start 58f18127550b0852e01ed50624211e595125587e9f6ab9a9f1889330ad566f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:39:54 compute-0 podman[152010]: 2026-01-27 08:39:54.025925799 +0000 UTC m=+0.148317275 container attach 58f18127550b0852e01ed50624211e595125587e9f6ab9a9f1889330ad566f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 27 08:39:54 compute-0 ceph-mon[74357]: pgmap v473: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:54 compute-0 sudo[152180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bijtscvycuiqoarydavxpherttlteqle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503194.1469495-62-14362174276152/AnsiballZ_file.py'
Jan 27 08:39:54 compute-0 sudo[152180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:54 compute-0 python3.9[152182]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:39:54 compute-0 sudo[152180]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:54 compute-0 kind_rosalind[152026]: {
Jan 27 08:39:54 compute-0 kind_rosalind[152026]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:39:54 compute-0 kind_rosalind[152026]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:39:54 compute-0 kind_rosalind[152026]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:39:54 compute-0 kind_rosalind[152026]:         "osd_id": 0,
Jan 27 08:39:54 compute-0 kind_rosalind[152026]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:39:54 compute-0 kind_rosalind[152026]:         "type": "bluestore"
Jan 27 08:39:54 compute-0 kind_rosalind[152026]:     }
Jan 27 08:39:54 compute-0 kind_rosalind[152026]: }
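kind_rosalind prints the `ceph-volume … raw list --format json` view requested through cephadm in the sudo trace above: the same OSD, now keyed by osd_uuid rather than OSD id. A quick consistency check between the two JSON documents, assuming both have been captured as strings:

```python
import json

def fsids_agree(lvm_raw: str, raw_raw: str) -> bool:
    """True if every osd_fsid from 'lvm list' shows up in 'raw list'."""
    lvm_fsids = {lv["tags"]["ceph.osd_fsid"]
                 for lvs in json.loads(lvm_raw).values() for lv in lvs}
    return lvm_fsids <= set(json.loads(raw_raw))  # raw list keys are osd_uuids
```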
Jan 27 08:39:54 compute-0 systemd[1]: libpod-58f18127550b0852e01ed50624211e595125587e9f6ab9a9f1889330ad566f33.scope: Deactivated successfully.
Jan 27 08:39:54 compute-0 podman[152010]: 2026-01-27 08:39:54.879858593 +0000 UTC m=+1.002250039 container died 58f18127550b0852e01ed50624211e595125587e9f6ab9a9f1889330ad566f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:39:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b79146273aa310172a52ecd8c25a3504b41b603a0b5aeb21e136a069b3d254e6-merged.mount: Deactivated successfully.
Jan 27 08:39:54 compute-0 podman[152010]: 2026-01-27 08:39:54.945969988 +0000 UTC m=+1.068361434 container remove 58f18127550b0852e01ed50624211e595125587e9f6ab9a9f1889330ad566f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 27 08:39:54 compute-0 systemd[1]: libpod-conmon-58f18127550b0852e01ed50624211e595125587e9f6ab9a9f1889330ad566f33.scope: Deactivated successfully.
Jan 27 08:39:54 compute-0 sudo[151751]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:54 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:39:55 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:39:55 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:55 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d3c312dc-e558-4b7e-a22d-ba4355763efd does not exist
Jan 27 08:39:55 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev aa704860-e6ac-43dc-94d0-200a2e2f77f2 does not exist
Jan 27 08:39:55 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 8c7ad564-8bc7-4d75-9bb9-10bd8991b41e does not exist
Jan 27 08:39:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:55 compute-0 sudo[152335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:55 compute-0 sudo[152335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:55 compute-0 sudo[152335]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:55 compute-0 sudo[152384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:39:55 compute-0 sudo[152384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:55 compute-0 sudo[152384]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:55.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:55 compute-0 python3.9[152392]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:39:56 compute-0 ceph-mon[74357]: pgmap v474: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:56 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:56 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:39:56 compute-0 sudo[152562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoxcjfgkeqrjxlknqfcchriiaeldkimy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503195.6708698-194-60703510452281/AnsiballZ_seboolean.py'
Jan 27 08:39:56 compute-0 sudo[152562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:39:56 compute-0 python3.9[152564]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 27 08:39:56 compute-0 sudo[152566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:56 compute-0 sudo[152566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:56 compute-0 sudo[152566]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:56 compute-0 sudo[152591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:39:56 compute-0 sudo[152591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:39:56 compute-0 sudo[152591]: pam_unix(sudo:session): session closed for user root
Jan 27 08:39:57 compute-0 sudo[152562]: pam_unix(sudo:session): session closed for user root
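The seboolean task above persistently enables virt_sandbox_use_netlink. The same end state via the setsebool CLI; the Ansible module uses the libselinux Python bindings instead, so this is an approximation:

```python
import subprocess

def set_sebool(name: str, on: bool = True, persistent: bool = True) -> None:
    """Toggle an SELinux boolean; -P writes it to policy so it survives reboot."""
    cmd = ["setsebool"]
    if persistent:
        cmd.append("-P")
    cmd += [name, "on" if on else "off"]
    subprocess.run(cmd, check=True)

set_sebool("virt_sandbox_use_netlink")
```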
Jan 27 08:39:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:39:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:57.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:39:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:39:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:57.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:39:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:39:57 compute-0 python3.9[152766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:39:58 compute-0 ceph-mon[74357]: pgmap v475: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:58 compute-0 python3.9[152887]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503197.2697308-218-193765689150070/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:39:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:39:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:39:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:39:59.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:39:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:39:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:39:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:39:59.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:39:59 compute-0 python3.9[153038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:39:59 compute-0 python3.9[153159]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503198.8269498-263-279536160787921/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:40:00 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 27 08:40:00 compute-0 sudo[153309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsnemskctixxgufcamyjxtuzilxgkbwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503200.3136618-314-173394291672541/AnsiballZ_setup.py'
Jan 27 08:40:00 compute-0 sudo[153309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:00 compute-0 ceph-mon[74357]: pgmap v476: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:00 compute-0 ceph-mon[74357]: overall HEALTH_OK
Jan 27 08:40:00 compute-0 python3.9[153311]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:40:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:01.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:01 compute-0 sudo[153309]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:40:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:01.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:40:01 compute-0 sudo[153395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thrabmzhqaqbbzlxgucyikyughyrsqij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503200.3136618-314-173394291672541/AnsiballZ_dnf.py'
Jan 27 08:40:01 compute-0 sudo[153395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:01 compute-0 python3.9[153397]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:40:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:40:02 compute-0 ceph-mon[74357]: pgmap v477: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:40:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:03.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:40:03 compute-0 sudo[153395]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:03.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:04 compute-0 sudo[153551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fztkixkqxoyalaruspqediqppwbhlikn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503203.4816937-350-257805867309626/AnsiballZ_systemd.py'
Jan 27 08:40:04 compute-0 sudo[153551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:04 compute-0 python3.9[153553]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:40:04 compute-0 sudo[153551]: pam_unix(sudo:session): session closed for user root
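The two tasks above install openvswitch (`state=present`) and then enable and start its unit. Shelling out reaches the same end state; the real modules drive the dnf API and systemd directly, so this is a simplified equivalent:

```python
import subprocess

def ensure_openvswitch() -> None:
    # state=present: idempotent, a no-op when the package is already installed.
    subprocess.run(["dnf", "install", "-y", "openvswitch"], check=True)
    # enabled=True plus state=started from the systemd task:
    subprocess.run(["systemctl", "enable", "openvswitch.service"], check=True)
    subprocess.run(["systemctl", "start", "openvswitch.service"], check=True)
```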
Jan 27 08:40:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:05.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:40:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:05.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:40:05 compute-0 ceph-mon[74357]: pgmap v478: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:05 compute-0 python3.9[153707]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:06 compute-0 python3.9[153828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503205.0011802-374-69732768901596/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.438650) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503206438769, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1597, "num_deletes": 252, "total_data_size": 2944216, "memory_usage": 2988936, "flush_reason": "Manual Compaction"}
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503206454930, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2880029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10738, "largest_seqno": 12334, "table_properties": {"data_size": 2872616, "index_size": 4420, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14905, "raw_average_key_size": 19, "raw_value_size": 2857789, "raw_average_value_size": 3785, "num_data_blocks": 198, "num_entries": 755, "num_filter_entries": 755, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769503037, "oldest_key_time": 1769503037, "file_creation_time": 1769503206, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 16316 microseconds, and 8030 cpu microseconds.
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.454999) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2880029 bytes OK
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.455029) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.457001) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.457022) EVENT_LOG_v1 {"time_micros": 1769503206457016, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.457047) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2937537, prev total WAL file size 2937537, number of live WAL files 2.
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.458280) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2812KB)], [26(7569KB)]
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503206458383, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10630764, "oldest_snapshot_seqno": -1}
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4006 keys, 8457222 bytes, temperature: kUnknown
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503206513318, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8457222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8427638, "index_size": 18466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97151, "raw_average_key_size": 24, "raw_value_size": 8352497, "raw_average_value_size": 2084, "num_data_blocks": 799, "num_entries": 4006, "num_filter_entries": 4006, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769503206, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.513700) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8457222 bytes
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.514929) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.0 rd, 153.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.4 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(6.6) write-amplify(2.9) OK, records in: 4528, records dropped: 522 output_compression: NoCompression
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.514960) EVENT_LOG_v1 {"time_micros": 1769503206514945, "job": 10, "event": "compaction_finished", "compaction_time_micros": 55082, "compaction_time_cpu_micros": 23649, "output_level": 6, "num_output_files": 1, "total_output_size": 8457222, "num_input_records": 4528, "num_output_records": 4006, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503206516489, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503206519670, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.458147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.519942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.519952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.519955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.519958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:40:06 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:40:06.519961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:40:06 compute-0 ceph-mon[74357]: pgmap v479: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:06 compute-0 python3.9[153978]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:07.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:07.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:40:07 compute-0 python3.9[154100]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503206.3671227-374-195985541302129/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:40:08 compute-0 ovn_controller[149455]: 2026-01-27T08:40:08Z|00025|memory|INFO|16000 kB peak resident set size after 30.0 seconds
Jan 27 08:40:08 compute-0 ovn_controller[149455]: 2026-01-27T08:40:08Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 27 08:40:08 compute-0 podman[154224]: 2026-01-27 08:40:08.654034003 +0000 UTC m=+0.096460144 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 27 08:40:08 compute-0 ceph-mon[74357]: pgmap v480: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:08 compute-0 python3.9[154261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:09.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:40:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:09.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:40:09 compute-0 python3.9[154396]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503208.2569978-506-265545794751550/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:40:09 compute-0 python3.9[154546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:10 compute-0 python3.9[154667]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503209.4622376-506-226093347860235/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:40:10 compute-0 ceph-mon[74357]: pgmap v481: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:11.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:11 compute-0 python3.9[154817]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:40:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:11.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:11 compute-0 sudo[154970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vohcvqlnrgipcahmasontvpxebfclion ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503211.5403712-620-8233325403270/AnsiballZ_file.py'
Jan 27 08:40:11 compute-0 sudo[154970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:12 compute-0 python3.9[154972]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:40:12 compute-0 sudo[154970]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:40:12 compute-0 sudo[155122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcqitnaumepgxyvbdqspsezavklituan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503212.2458124-644-255703358626591/AnsiballZ_stat.py'
Jan 27 08:40:12 compute-0 sudo[155122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:12 compute-0 python3.9[155124]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:12 compute-0 sudo[155122]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:12 compute-0 ceph-mon[74357]: pgmap v482: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:12 compute-0 sudo[155200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlhizbjdmdkglmlptyilnxdareebcuwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503212.2458124-644-255703358626591/AnsiballZ_file.py'
Jan 27 08:40:12 compute-0 sudo[155200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:13.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:13 compute-0 python3.9[155202]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:40:13 compute-0 sudo[155200]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:13.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:13 compute-0 sudo[155353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydbpyndtnyrqufremzkzbvmdhcgepcet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503213.3641691-644-278112039318781/AnsiballZ_stat.py'
Jan 27 08:40:13 compute-0 sudo[155353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:13 compute-0 python3.9[155355]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:13 compute-0 sudo[155353]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:14 compute-0 sudo[155431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgzwxtalsbbcxvcqxtuqupuqdapyvjrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503213.3641691-644-278112039318781/AnsiballZ_file.py'
Jan 27 08:40:14 compute-0 sudo[155431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:14 compute-0 python3.9[155433]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:40:14 compute-0 sudo[155431]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:14 compute-0 ceph-mon[74357]: pgmap v483: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:14 compute-0 sudo[155583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqhsmerdfjcrsjkxbpjsdjhpalnozslh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503214.5929198-713-88762976242354/AnsiballZ_file.py'
Jan 27 08:40:14 compute-0 sudo[155583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:40:14
Jan 27 08:40:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:40:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:40:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'images', 'cephfs.cephfs.data', '.rgw.root']
Jan 27 08:40:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:40:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:40:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:15 compute-0 python3.9[155585]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:40:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:15.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:15 compute-0 sudo[155583]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:15.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:15 compute-0 sudo[155736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfwvyggpvvbvjxgieuwvgunfvtljvfrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503215.294795-737-146499886039367/AnsiballZ_stat.py'
Jan 27 08:40:15 compute-0 sudo[155736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:15 compute-0 python3.9[155738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:15 compute-0 sudo[155736]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:16 compute-0 sudo[155814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsbxhnwidbloaubqjgfltfxuejyfibaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503215.294795-737-146499886039367/AnsiballZ_file.py'
Jan 27 08:40:16 compute-0 sudo[155814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:16 compute-0 python3.9[155816]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:40:16 compute-0 sudo[155814]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:16 compute-0 ceph-mon[74357]: pgmap v484: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:16 compute-0 sudo[155966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhpogupnaltcfrphhseeicrvpepqbjej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503216.5366907-773-179269154976913/AnsiballZ_stat.py'
Jan 27 08:40:16 compute-0 sudo[155966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:17 compute-0 python3.9[155968]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:17 compute-0 sudo[155969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:40:17 compute-0 sudo[155969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:17 compute-0 sudo[155969]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:17 compute-0 sudo[155966]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:17.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:17 compute-0 sudo[155997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:40:17 compute-0 sudo[155997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:17 compute-0 sudo[155997]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:17.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:40:17 compute-0 sudo[156095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqtlmxcbcdtgaoqfzmcimjhcjimtuyxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503216.5366907-773-179269154976913/AnsiballZ_file.py'
Jan 27 08:40:17 compute-0 sudo[156095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:17 compute-0 python3.9[156097]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:40:17 compute-0 sudo[156095]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:18 compute-0 sudo[156247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqxnsmxeidsxozwytzgxmihzvnzdezxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503217.7726004-809-172345539098969/AnsiballZ_systemd.py'
Jan 27 08:40:18 compute-0 sudo[156247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:18 compute-0 python3.9[156249]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:40:18 compute-0 systemd[1]: Reloading.
Jan 27 08:40:18 compute-0 systemd-rc-local-generator[156277]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:40:18 compute-0 systemd-sysv-generator[156280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:40:18 compute-0 sudo[156247]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:18 compute-0 ceph-mon[74357]: pgmap v485: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:19.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:19.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:20 compute-0 sudo[156438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deknozkomnzmiafaxnakyjckzplcjfab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503219.9190094-833-165501335348400/AnsiballZ_stat.py'
Jan 27 08:40:20 compute-0 sudo[156438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:20 compute-0 python3.9[156440]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:20 compute-0 sudo[156438]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:20 compute-0 sudo[156516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmwqjdklyuelgvkebrcrdgagtcbxodwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503219.9190094-833-165501335348400/AnsiballZ_file.py'
Jan 27 08:40:20 compute-0 sudo[156516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:20 compute-0 python3.9[156518]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:40:20 compute-0 ceph-mon[74357]: pgmap v486: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:20 compute-0 sudo[156516]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:21.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:40:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:21.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:40:21 compute-0 sudo[156669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgaoumxciwfbzkowvtraxoqzrezttxxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503221.08306-869-238160390403956/AnsiballZ_stat.py'
Jan 27 08:40:21 compute-0 sudo[156669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:21 compute-0 python3.9[156671]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:21 compute-0 sudo[156669]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:21 compute-0 sudo[156747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jghjjcoknkgsppakwbiyivgvcrlvezzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503221.08306-869-238160390403956/AnsiballZ_file.py'
Jan 27 08:40:21 compute-0 sudo[156747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:22 compute-0 python3.9[156749]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:40:22 compute-0 sudo[156747]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:40:22 compute-0 sudo[156899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlermgkkiassffrhayesuisfygxrvdzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503222.3038733-905-102480671233962/AnsiballZ_systemd.py'
Jan 27 08:40:22 compute-0 sudo[156899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:22 compute-0 python3.9[156901]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:40:22 compute-0 systemd[1]: Reloading.
Jan 27 08:40:22 compute-0 ceph-mon[74357]: pgmap v487: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:23 compute-0 systemd-rc-local-generator[156930]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:40:23 compute-0 systemd-sysv-generator[156933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:40:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:23.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:23.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:23 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 08:40:23 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 08:40:23 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 08:40:23 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 08:40:23 compute-0 sudo[156899]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:23 compute-0 sudo[157095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtmqmznvglhqdklxiqaagkwgtmakpfcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503223.5890453-935-26295777625472/AnsiballZ_file.py'
Jan 27 08:40:23 compute-0 sudo[157095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:23 compute-0 ceph-mon[74357]: pgmap v488: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:24 compute-0 python3.9[157097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:40:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:40:24 compute-0 sudo[157095]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:24 compute-0 sudo[157247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-albelsipzrklnmrviqykqxjpyhowrwkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503224.3495991-959-108052123535850/AnsiballZ_stat.py'
Jan 27 08:40:24 compute-0 sudo[157247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:24 compute-0 python3.9[157249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:24 compute-0 sudo[157247]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:25.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:40:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:25.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:40:25 compute-0 sudo[157371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cskoyeiqvatalcsavlmlbxbzrkraocjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503224.3495991-959-108052123535850/AnsiballZ_copy.py'
Jan 27 08:40:25 compute-0 sudo[157371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:25 compute-0 python3.9[157373]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503224.3495991-959-108052123535850/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:40:25 compute-0 sudo[157371]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:26 compute-0 ceph-mon[74357]: pgmap v489: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:26 compute-0 sudo[157523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukrknxmjjhoaqzafhzuyrrqgiwodaubx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503226.0427904-1010-206994737793059/AnsiballZ_file.py'
Jan 27 08:40:26 compute-0 sudo[157523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:26 compute-0 python3.9[157525]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:40:26 compute-0 sudo[157523]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:27.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:27 compute-0 sudo[157676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stxtfjgscbibtljpyqluwoohjmkqsvhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503226.8996277-1034-262928387821485/AnsiballZ_file.py'
Jan 27 08:40:27 compute-0 sudo[157676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:40:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:27.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:27 compute-0 python3.9[157678]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:40:27 compute-0 sudo[157676]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:28 compute-0 sudo[157828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgyymffwrgaaidoyksgicrvrdpkcvrpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503227.7192705-1058-138892829591424/AnsiballZ_stat.py'
Jan 27 08:40:28 compute-0 sudo[157828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:28 compute-0 ceph-mon[74357]: pgmap v490: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:28 compute-0 python3.9[157830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:28 compute-0 sudo[157828]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:28 compute-0 sudo[157951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oymywpxhpfkyaivqaspnejheygsphbof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503227.7192705-1058-138892829591424/AnsiballZ_copy.py'
Jan 27 08:40:28 compute-0 sudo[157951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:28 compute-0 python3.9[157953]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503227.7192705-1058-138892829591424/.source.json _original_basename=.4jdkghzl follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:40:28 compute-0 sudo[157951]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:29.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:29.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:29 compute-0 python3.9[158104]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:40:30 compute-0 ceph-mon[74357]: pgmap v491: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:31.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:40:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:31.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:40:31 compute-0 sudo[158526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtykxjubcwzzlpawprzsirxzkflcnmgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503231.2858498-1178-89486822347667/AnsiballZ_container_config_data.py'
Jan 27 08:40:31 compute-0 sudo[158526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:31 compute-0 python3.9[158528]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 27 08:40:31 compute-0 sudo[158526]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:32 compute-0 ceph-mon[74357]: pgmap v492: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:40:32 compute-0 sudo[158678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqdnwxytmtyanrqjmtjpgmjmkmpgqtdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503232.2813454-1211-280537152441745/AnsiballZ_container_config_hash.py'
Jan 27 08:40:32 compute-0 sudo[158678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:32 compute-0 python3.9[158680]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 08:40:32 compute-0 sudo[158678]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:33.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:40:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:33.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:40:33 compute-0 sudo[158831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miefbjshcgraewsvfirwvjtcpjgoowru ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769503233.3285105-1241-82594915553587/AnsiballZ_edpm_container_manage.py'
Jan 27 08:40:33 compute-0 sudo[158831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:34 compute-0 python3[158833]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 08:40:34 compute-0 ceph-mon[74357]: pgmap v493: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:35.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:35.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:36 compute-0 ceph-mon[74357]: pgmap v494: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:40:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:37.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:40:37 compute-0 sudo[158898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:40:37 compute-0 sudo[158898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:37 compute-0 sudo[158898]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:40:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:37.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:37 compute-0 sudo[158923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:40:37 compute-0 sudo[158923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:37 compute-0 sudo[158923]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:38 compute-0 ceph-mon[74357]: pgmap v495: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:39.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:39.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:40 compute-0 ceph-mon[74357]: pgmap v496: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:40:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:41.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:40:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:41.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:41 compute-0 podman[158965]: 2026-01-27 08:40:41.9971909 +0000 UTC m=+2.801085626 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 08:40:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:40:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:43.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:43.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:43 compute-0 ceph-mon[74357]: pgmap v497: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:40:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:40:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:40:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:40:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:40:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:40:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:40:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:45.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:40:45 compute-0 ceph-mon[74357]: pgmap v498: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:45.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:46 compute-0 ceph-mon[74357]: pgmap v499: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:46 compute-0 podman[158846]: 2026-01-27 08:40:46.937927438 +0000 UTC m=+12.733573112 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:40:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:47.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:47 compute-0 podman[159053]: 2026-01-27 08:40:47.055676222 +0000 UTC m=+0.022582655 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:40:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:40:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:47.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:47 compute-0 podman[159053]: 2026-01-27 08:40:47.393617854 +0000 UTC m=+0.360524267 container create 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:40:47 compute-0 python3[158833]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:40:47 compute-0 sudo[158831]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:48 compute-0 sudo[159241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rncflugsnyugwpcdpaknqzrbpccvwbot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503247.723626-1265-239879835909733/AnsiballZ_stat.py'
Jan 27 08:40:48 compute-0 sudo[159241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:48 compute-0 python3.9[159243]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:40:48 compute-0 sudo[159241]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:48 compute-0 ceph-mon[74357]: pgmap v500: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:48 compute-0 sudo[159395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhdepqmevgcjxeclkrjrgsnidwupwnmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503248.68246-1292-120681456704477/AnsiballZ_file.py'
Jan 27 08:40:48 compute-0 sudo[159395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:49 compute-0 python3.9[159397]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:40:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:40:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:49.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:40:49 compute-0 sudo[159395]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:49.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:49 compute-0 sudo[159472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdhnnfagopcdztrbkmgzvcyouzzlcpfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503248.68246-1292-120681456704477/AnsiballZ_stat.py'
Jan 27 08:40:49 compute-0 sudo[159472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:49 compute-0 python3.9[159474]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:40:49 compute-0 sudo[159472]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:50 compute-0 sudo[159623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltibsbpwpksibqaaoegmeymlqlfjphac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503249.648805-1292-266083665608252/AnsiballZ_copy.py'
Jan 27 08:40:50 compute-0 sudo[159623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:50 compute-0 python3.9[159625]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769503249.648805-1292-266083665608252/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:40:50 compute-0 sudo[159623]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:50 compute-0 sudo[159699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggzddcdotepavtieuddzhiygwuxwsxle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503249.648805-1292-266083665608252/AnsiballZ_systemd.py'
Jan 27 08:40:50 compute-0 sudo[159699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:50 compute-0 ceph-mon[74357]: pgmap v501: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:50 compute-0 python3.9[159701]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:40:50 compute-0 systemd[1]: Reloading.
Jan 27 08:40:50 compute-0 systemd-rc-local-generator[159724]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:40:50 compute-0 systemd-sysv-generator[159731]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:40:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.003000082s ======
Jan 27 08:40:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:51.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Jan 27 08:40:51 compute-0 sudo[159699]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:51.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:51 compute-0 sudo[159811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-almetcugtngluaaivhyeagwfqckzcsca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503249.648805-1292-266083665608252/AnsiballZ_systemd.py'
Jan 27 08:40:51 compute-0 sudo[159811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:51 compute-0 python3.9[159813]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:40:51 compute-0 systemd[1]: Reloading.
Jan 27 08:40:51 compute-0 systemd-rc-local-generator[159844]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:40:51 compute-0 systemd-sysv-generator[159848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:40:52 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 27 08:40:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb17eb27e59f4c21bc57bc65361e19cccf17ce5047eab78d2449881637a8ec8e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb17eb27e59f4c21bc57bc65361e19cccf17ce5047eab78d2449881637a8ec8e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089.
Jan 27 08:40:52 compute-0 podman[159855]: 2026-01-27 08:40:52.167855069 +0000 UTC m=+0.115528064 container init 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: + sudo -E kolla_set_configs
Jan 27 08:40:52 compute-0 podman[159855]: 2026-01-27 08:40:52.198084935 +0000 UTC m=+0.145757860 container start 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 08:40:52 compute-0 edpm-start-podman-container[159855]: ovn_metadata_agent
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Validating config file
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Copying service configuration files
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Writing out command to execute
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 27 08:40:52 compute-0 edpm-start-podman-container[159854]: Creating additional drop-in dependency for "ovn_metadata_agent" (5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089)
Jan 27 08:40:52 compute-0 podman[159878]: 2026-01-27 08:40:52.257531088 +0000 UTC m=+0.049353896 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: ++ cat /run_command
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: + CMD=neutron-ovn-metadata-agent
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: + ARGS=
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: + sudo kolla_copy_cacerts
Jan 27 08:40:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:40:52 compute-0 systemd[1]: Reloading.
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: + [[ ! -n '' ]]
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: + . kolla_extend_start
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: Running command: 'neutron-ovn-metadata-agent'
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: + umask 0022
Jan 27 08:40:52 compute-0 ovn_metadata_agent[159871]: + exec neutron-ovn-metadata-agent
Jan 27 08:40:52 compute-0 systemd-rc-local-generator[159950]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:40:52 compute-0 systemd-sysv-generator[159954]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:40:52 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 27 08:40:52 compute-0 sudo[159811]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:52 compute-0 ceph-mon[74357]: pgmap v502: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:53.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:53.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:53 compute-0 python3.9[160111]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 08:40:54 compute-0 ceph-mon[74357]: pgmap v503: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.185 159876 INFO neutron.common.config [-] Logging enabled!
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.185 159876 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.185 159876 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.186 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.186 159876 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.186 159876 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.186 159876 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.187 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.187 159876 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.187 159876 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.187 159876 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.187 159876 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.187 159876 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.187 159876 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.187 159876 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.188 159876 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.188 159876 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.188 159876 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.188 159876 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.188 159876 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.188 159876 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.188 159876 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.188 159876 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.189 159876 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.189 159876 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.189 159876 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.189 159876 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.189 159876 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.189 159876 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.189 159876 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.190 159876 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.190 159876 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.190 159876 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.190 159876 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.190 159876 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.190 159876 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.190 159876 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.190 159876 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.190 159876 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.191 159876 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.191 159876 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.191 159876 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.191 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.191 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.191 159876 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.191 159876 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.191 159876 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.191 159876 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.191 159876 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.192 159876 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.192 159876 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.192 159876 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.192 159876 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.192 159876 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.192 159876 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.192 159876 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.192 159876 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.192 159876 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.193 159876 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.193 159876 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.193 159876 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.193 159876 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.193 159876 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.193 159876 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.193 159876 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.194 159876 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.194 159876 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.194 159876 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.194 159876 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.194 159876 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.194 159876 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.194 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.194 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.195 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.195 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.195 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.195 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.195 159876 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.195 159876 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.195 159876 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.195 159876 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.195 159876 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.195 159876 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.196 159876 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.196 159876 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.196 159876 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.196 159876 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.196 159876 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.196 159876 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.196 159876 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.196 159876 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.196 159876 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.196 159876 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.197 159876 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.197 159876 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.197 159876 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.197 159876 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.197 159876 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.197 159876 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.197 159876 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.197 159876 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.197 159876 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.197 159876 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.198 159876 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.198 159876 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.198 159876 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.198 159876 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.198 159876 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.198 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.198 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.198 159876 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.198 159876 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.199 159876 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.199 159876 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.199 159876 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.199 159876 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.199 159876 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.199 159876 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.199 159876 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.200 159876 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.200 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.200 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.200 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.200 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.200 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.200 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.200 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.201 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.201 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.201 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.201 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.201 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.201 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.201 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.201 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.202 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.202 159876 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.202 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.202 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.202 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.202 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.202 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.202 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.202 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.203 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.203 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.203 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.203 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.203 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.203 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.203 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.203 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.203 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.204 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.204 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.204 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.204 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.204 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.204 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.204 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.204 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.204 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.205 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.205 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.205 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.205 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.205 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.205 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.205 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.205 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.206 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.206 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.206 159876 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.206 159876 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.206 159876 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.206 159876 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.206 159876 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.206 159876 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.206 159876 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.207 159876 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.207 159876 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.207 159876 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.207 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.207 159876 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.207 159876 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.207 159876 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.207 159876 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.208 159876 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.208 159876 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.208 159876 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.208 159876 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.208 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.208 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.208 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.208 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.208 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.209 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.209 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.209 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.209 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.209 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.209 159876 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.209 159876 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.209 159876 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.210 159876 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.210 159876 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.210 159876 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.210 159876 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.210 159876 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.210 159876 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.210 159876 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.210 159876 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.211 159876 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.211 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.211 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.211 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.211 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.211 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.211 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.211 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.211 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.211 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.212 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.212 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.212 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.212 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.212 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.212 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.212 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.212 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.212 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.213 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.213 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.213 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.213 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.213 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.213 159876 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.213 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.213 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.213 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.214 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.214 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.214 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.214 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.214 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.214 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.214 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.214 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.215 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.215 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.215 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.215 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.215 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.215 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.215 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.215 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.216 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.216 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.216 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.216 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.216 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.216 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.216 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.216 159876 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.217 159876 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.217 159876 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.217 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.217 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.217 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.217 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.217 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.217 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.217 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.217 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.218 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.218 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.218 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.218 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.218 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.218 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.218 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.218 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.218 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.219 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.219 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.219 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.219 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.219 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.219 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.219 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.219 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.220 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.220 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.220 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.220 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.220 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.220 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.220 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.220 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.220 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.221 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.221 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.221 159876 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.221 159876 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.230 159876 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.231 159876 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.231 159876 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.231 159876 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.231 159876 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.246 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name fd496359-7f94-4196-96c9-9e7fb7c843a0 (UUID: fd496359-7f94-4196-96c9-9e7fb7c843a0) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.269 159876 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.269 159876 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.269 159876 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.269 159876 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.272 159876 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.278 159876 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.283 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'fd496359-7f94-4196-96c9-9e7fb7c843a0'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fd1f70fc880>], external_ids={}, name=fd496359-7f94-4196-96c9-9e7fb7c843a0, nb_cfg_timestamp=1769503186678, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.284 159876 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fd1f70ebf70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.285 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.285 159876 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.285 159876 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.285 159876 INFO oslo_service.service [-] Starting 1 workers
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.289 159876 DEBUG oslo_service.service [-] Started child 160136 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.293 159876 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpqwx_oiz6/privsep.sock']
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.295 160136 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-230111'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.331 160136 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.332 160136 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.333 160136 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.337 160136 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.346 160136 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 08:40:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.356 160136 INFO eventlet.wsgi.server [-] (160136) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 27 08:40:54 compute-0 sudo[160266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbbogjuyhaoeyyjlssengzoruksuafyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503254.487343-1427-270922042672191/AnsiballZ_stat.py'
Jan 27 08:40:54 compute-0 sudo[160266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:54 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 27 08:40:55 compute-0 python3.9[160268]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:40:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:55.020 159876 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 08:40:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:55.020 159876 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpqwx_oiz6/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 08:40:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.838 160269 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 08:40:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.843 160269 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 08:40:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.847 160269 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 27 08:40:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:54.847 160269 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160269
Jan 27 08:40:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:55.024 160269 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb8e946-f0fc-476a-b5e1-32bbc88f798c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:40:55 compute-0 sudo[160266]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:40:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:55.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:40:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:55.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:55 compute-0 sudo[160397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnoxjlryoappouqqpbtawhtufjljjndr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503254.487343-1427-270922042672191/AnsiballZ_copy.py'
Jan 27 08:40:55 compute-0 sudo[160397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:40:55 compute-0 sudo[160400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:40:55 compute-0 sudo[160400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:55 compute-0 sudo[160400]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:55 compute-0 sudo[160425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:40:55 compute-0 sudo[160425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:55 compute-0 sudo[160425]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:55.620 160269 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:40:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:55.620 160269 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:40:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:55.621 160269 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:40:55 compute-0 sudo[160450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:40:55 compute-0 sudo[160450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:55 compute-0 sudo[160450]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:55 compute-0 python3.9[160399]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503254.487343-1427-270922042672191/.source.yaml _original_basename=.3o6s6dnn follow=False checksum=589a4398ef9d0095a0cde663665a9d47ceaab674 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:40:55 compute-0 sudo[160397]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:55 compute-0 sudo[160475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:40:55 compute-0 sudo[160475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:56 compute-0 sshd-session[150764]: Connection closed by 192.168.122.30 port 36304
Jan 27 08:40:56 compute-0 sshd-session[150725]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:40:56 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 27 08:40:56 compute-0 systemd[1]: session-48.scope: Consumed 57.555s CPU time.
Jan 27 08:40:56 compute-0 systemd-logind[799]: Session 48 logged out. Waiting for processes to exit.
Jan 27 08:40:56 compute-0 systemd-logind[799]: Removed session 48.
Jan 27 08:40:56 compute-0 sudo[160475]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.196 160269 DEBUG oslo.privsep.daemon [-] privsep: reply[8a287d19-f383-4ef0-8c33-a02e2a5b798a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.198 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, column=external_ids, values=({'neutron:ovn-metadata-id': 'df97bb30-d186-554c-aa00-aa998c16e48f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:40:56 compute-0 ceph-mon[74357]: pgmap v504: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 27 08:40:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 27 08:40:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:40:56 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:40:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:40:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:40:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:40:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:40:56 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 0812cbd0-d3f3-4c2a-8d2d-13a329655798 does not exist
Jan 27 08:40:56 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev a7d12d7b-e9af-4a01-90fa-fbd3a636b6ca does not exist
Jan 27 08:40:56 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 2ebac33e-b58a-464a-879d-43fff1ec91bd does not exist
Jan 27 08:40:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:40:56 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:40:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:40:56 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:40:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:40:56 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.321 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.327 159876 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.327 159876 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.327 159876 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.327 159876 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.327 159876 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.328 159876 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.328 159876 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.328 159876 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.328 159876 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.328 159876 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.328 159876 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.328 159876 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.328 159876 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.329 159876 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.329 159876 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.329 159876 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.329 159876 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.329 159876 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.329 159876 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.329 159876 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.329 159876 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.329 159876 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.330 159876 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.330 159876 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.330 159876 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.330 159876 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.330 159876 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.330 159876 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.330 159876 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.330 159876 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.330 159876 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.331 159876 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.331 159876 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.331 159876 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.331 159876 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.331 159876 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.331 159876 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.331 159876 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.331 159876 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.332 159876 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.332 159876 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.332 159876 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.332 159876 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.332 159876 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.332 159876 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.332 159876 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.332 159876 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.332 159876 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.333 159876 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.333 159876 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.333 159876 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.333 159876 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.333 159876 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.333 159876 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.333 159876 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.333 159876 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.334 159876 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.334 159876 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.334 159876 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.334 159876 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.334 159876 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.334 159876 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.334 159876 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.334 159876 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.334 159876 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.335 159876 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.335 159876 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.335 159876 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.335 159876 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.335 159876 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.335 159876 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.335 159876 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.335 159876 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.335 159876 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.336 159876 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.336 159876 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.336 159876 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.336 159876 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.336 159876 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.336 159876 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.336 159876 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.336 159876 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.336 159876 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.336 159876 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.337 159876 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.337 159876 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.337 159876 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.337 159876 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.337 159876 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.337 159876 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.337 159876 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.337 159876 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.337 159876 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.337 159876 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.338 159876 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.338 159876 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.338 159876 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.338 159876 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.338 159876 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.338 159876 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.338 159876 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.338 159876 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.338 159876 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.339 159876 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.339 159876 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.339 159876 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.339 159876 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.339 159876 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.339 159876 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.339 159876 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.339 159876 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.339 159876 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.340 159876 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.340 159876 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.340 159876 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.340 159876 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.340 159876 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.340 159876 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.340 159876 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.340 159876 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.340 159876 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.341 159876 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.341 159876 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.341 159876 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.341 159876 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.341 159876 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.341 159876 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.341 159876 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.341 159876 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.342 159876 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.342 159876 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.342 159876 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.342 159876 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.342 159876 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.342 159876 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.342 159876 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.342 159876 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.343 159876 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.343 159876 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.343 159876 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.343 159876 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.343 159876 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.343 159876 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.343 159876 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.344 159876 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.344 159876 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.344 159876 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.344 159876 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.344 159876 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.344 159876 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.344 159876 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.344 159876 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.345 159876 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.345 159876 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.345 159876 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 sudo[160555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.345 159876 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.345 159876 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.345 159876 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.345 159876 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.346 159876 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.346 159876 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.346 159876 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.346 159876 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.346 159876 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.346 159876 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.346 159876 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.347 159876 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.347 159876 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.347 159876 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.347 159876 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.347 159876 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.347 159876 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.347 159876 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.348 159876 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.348 159876 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.348 159876 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.348 159876 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.348 159876 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.349 159876 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.349 159876 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.349 159876 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 sudo[160555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.349 159876 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.349 159876 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.350 159876 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.350 159876 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.350 159876 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.350 159876 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.350 159876 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.350 159876 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.350 159876 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.350 159876 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.351 159876 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.351 159876 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.351 159876 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.351 159876 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 sudo[160555]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.351 159876 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.351 159876 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.351 159876 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.352 159876 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.352 159876 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.352 159876 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.352 159876 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.352 159876 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.352 159876 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.352 159876 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.352 159876 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.352 159876 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.353 159876 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.353 159876 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.353 159876 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.353 159876 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.353 159876 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.353 159876 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.353 159876 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.354 159876 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.354 159876 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.354 159876 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.354 159876 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.354 159876 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.354 159876 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.354 159876 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.354 159876 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.354 159876 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.355 159876 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.355 159876 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.355 159876 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.355 159876 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.355 159876 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.355 159876 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.355 159876 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.355 159876 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.355 159876 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.355 159876 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.356 159876 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.356 159876 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.356 159876 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.356 159876 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.356 159876 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.356 159876 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.356 159876 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.356 159876 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.357 159876 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.357 159876 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.357 159876 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.357 159876 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.357 159876 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.357 159876 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.357 159876 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.357 159876 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.357 159876 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.358 159876 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.358 159876 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.358 159876 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.358 159876 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.358 159876 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.358 159876 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.358 159876 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.358 159876 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.358 159876 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.359 159876 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.359 159876 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.359 159876 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.359 159876 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.359 159876 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.359 159876 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.359 159876 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.359 159876 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.360 159876 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.360 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.360 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.360 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.360 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.360 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.361 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.361 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.361 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.361 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.361 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.361 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.361 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.362 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.362 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.362 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.362 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.362 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.362 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.362 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.363 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.363 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.363 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.363 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.363 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.363 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.363 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.364 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.364 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.364 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.364 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.364 159876 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.364 159876 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.364 159876 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.364 159876 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.364 159876 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:40:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:40:56.365 159876 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
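The row of asterisks above closes the option dump that oslo.config emits when debug logging is enabled; every preceding "section.option = value" line came from one log_opt_values() call, with secret options such as transport_url masked as ****. A minimal sketch, assuming only the line shape seen in this journal, that folds the dump back into a dict:

    import re

    # Matches "... [-] section.option   = value log_opt_values ..." lines
    # as they appear above; the pattern is an assumption from this journal.
    DUMP_RE = re.compile(r"\[-\] (\S+)\s+= (.*?) log_opt_values ")

    def collect_opts(lines):
        opts = {}
        for line in lines:
            m = DUMP_RE.search(line)
            if m:
                opts[m.group(1)] = m.group(2).strip()
        return opts

Run over this journal it would map, for example, ovn.ovn_sb_connection to ssl:ovsdbserver-sb.openstack.svc:6642.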
Jan 27 08:40:56 compute-0 sudo[160580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:40:56 compute-0 sudo[160580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:56 compute-0 sudo[160580]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:56 compute-0 sudo[160605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:40:56 compute-0 sudo[160605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:56 compute-0 sudo[160605]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:56 compute-0 sudo[160630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:40:56 compute-0 sudo[160630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
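The sudo records in this journal show the ceph-admin user driving cephadm: repeated COMMAND=/bin/true entries (plausibly passwordless-sudo probes, an assumption) interleaved with real work such as the ceph-volume lvm batch call above. A minimal sketch to audit what was actually executed:

    import re

    # Captures pid, invoking user and command from sudo's journal records;
    # pam_unix session open/close lines do not match. Pattern is an assumption.
    SUDO_RE = re.compile(r"sudo\[(\d+)\]: (\S+) : .*COMMAND=(.*)$")

    def sudo_commands(journal_path):
        with open(journal_path) as fh:
            for line in fh:
                m = SUDO_RE.search(line)
                if m:
                    yield m.groups()  # (pid, user, command)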
Jan 27 08:40:56 compute-0 podman[160695]: 2026-01-27 08:40:56.871861523 +0000 UTC m=+0.045788117 container create 755139874a299819ccf282c37098fdd36a1524b45eee225843692f219452984b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:40:56 compute-0 systemd[1]: Started libpod-conmon-755139874a299819ccf282c37098fdd36a1524b45eee225843692f219452984b.scope.
Jan 27 08:40:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:40:56 compute-0 podman[160695]: 2026-01-27 08:40:56.85113892 +0000 UTC m=+0.025065534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:40:56 compute-0 podman[160695]: 2026-01-27 08:40:56.958694933 +0000 UTC m=+0.132621547 container init 755139874a299819ccf282c37098fdd36a1524b45eee225843692f219452984b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:40:56 compute-0 podman[160695]: 2026-01-27 08:40:56.971249119 +0000 UTC m=+0.145175713 container start 755139874a299819ccf282c37098fdd36a1524b45eee225843692f219452984b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:40:56 compute-0 podman[160695]: 2026-01-27 08:40:56.976248848 +0000 UTC m=+0.150175462 container attach 755139874a299819ccf282c37098fdd36a1524b45eee225843692f219452984b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:40:56 compute-0 systemd[1]: libpod-755139874a299819ccf282c37098fdd36a1524b45eee225843692f219452984b.scope: Deactivated successfully.
Jan 27 08:40:56 compute-0 inspiring_vaughan[160711]: 167 167
Jan 27 08:40:56 compute-0 conmon[160711]: conmon 755139874a299819ccf2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-755139874a299819ccf282c37098fdd36a1524b45eee225843692f219452984b.scope/container/memory.events
Jan 27 08:40:56 compute-0 podman[160695]: 2026-01-27 08:40:56.980715052 +0000 UTC m=+0.154641676 container died 755139874a299819ccf282c37098fdd36a1524b45eee225843692f219452984b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:40:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1e5d70a0a34ee328f50eea0e89c556307d848e32ac3e575c7bdfc38b1912534-merged.mount: Deactivated successfully.
Jan 27 08:40:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:57 compute-0 podman[160695]: 2026-01-27 08:40:57.033958173 +0000 UTC m=+0.207884757 container remove 755139874a299819ccf282c37098fdd36a1524b45eee225843692f219452984b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:40:57 compute-0 systemd[1]: libpod-conmon-755139874a299819ccf282c37098fdd36a1524b45eee225843692f219452984b.scope: Deactivated successfully.
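The podman[160695] events above trace one short-lived cephadm helper container (inspiring_vaughan): create, init, start, attach, died and remove all land within roughly 160 ms, which is also why conmon failed to read its cgroup memory.events before teardown. A minimal sketch, assuming the event line shape shown here, that regroups such events per container:

    import re
    from collections import defaultdict

    # "... container <event> <64-hex id> ..." as emitted by podman above;
    # "image pull" records deliberately do not match.
    EVENT_RE = re.compile(
        r"podman\[\d+\]: (\S+ \S+) \S+ \S+ \S+ container (\w+) ([0-9a-f]{64})")

    def lifecycles(lines):
        events = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                ts, event, cid = m.groups()
                events[cid[:12]].append((ts, event))
        return events

For container 755139874a29 this yields the create, init, start, attach, died, remove sequence timestamped above.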
Jan 27 08:40:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:40:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:57.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
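The beast lines are radosgw's access log: client, user, timestamp, request line, HTTP status, body bytes and latency. The anonymous HEAD / probes arriving every couple of seconds from 192.168.122.100 and .102 look like load-balancer health checks (an assumption based on their cadence). A minimal parsing sketch:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (\S+) - (\S+) \[([^\]]+)\] "([^"]+)" (\d+) (\d+)'
        r'.*latency=([0-9.]+)s')

    def parse_beast(line):
        # Returns (ip, user, timestamp, request, status, bytes, latency_s)
        m = BEAST_RE.search(line)
        if not m:
            return None
        ip, user, ts, req, status, size, lat = m.groups()
        return ip, user, ts, req, int(status), int(size), float(lat)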
Jan 27 08:40:57 compute-0 podman[160736]: 2026-01-27 08:40:57.206654326 +0000 UTC m=+0.058034744 container create 71e2b860e1f3a46ed1f854b1ae053c0192295c4dd3de7723d4b2d603ca489a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_babbage, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 08:40:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 27 08:40:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:40:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:40:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:40:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:40:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:40:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
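Each ceph-mon dispatch record above embeds the command the mgr issued as a JSON array after cmd=, so the orchestration steps (dropping the per-host osd_memory_target override, minting a minimal conf, fetching the client.admin and client.bootstrap-osd keys) can be decoded mechanically. A minimal sketch:

    import json
    import re

    CMD_RE = re.compile(r"cmd=(\[.*\]): dispatch")

    def mon_cmd(line):
        # Returns the dispatched command as a list of dicts, or None.
        m = CMD_RE.search(line)
        return json.loads(m.group(1)) if m else None

On the first dispatch line above this returns [{'prefix': 'config rm', 'who': 'osd/host:compute-2', 'name': 'osd_memory_target'}].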
Jan 27 08:40:57 compute-0 systemd[1]: Started libpod-conmon-71e2b860e1f3a46ed1f854b1ae053c0192295c4dd3de7723d4b2d603ca489a14.scope.
Jan 27 08:40:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:40:57 compute-0 podman[160736]: 2026-01-27 08:40:57.177456279 +0000 UTC m=+0.028836717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:40:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/737f51a989cba19fc74f985145fb25e880b086163e156a5c810c3d6879949f14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/737f51a989cba19fc74f985145fb25e880b086163e156a5c810c3d6879949f14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/737f51a989cba19fc74f985145fb25e880b086163e156a5c810c3d6879949f14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/737f51a989cba19fc74f985145fb25e880b086163e156a5c810c3d6879949f14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/737f51a989cba19fc74f985145fb25e880b086163e156a5c810c3d6879949f14/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
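The xfs warnings flag the classic 32-bit time_t ceiling: 0x7fffffff seconds after the Unix epoch. Two lines confirm the date the kernel is warning about:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647, the signed 32-bit time_t maximum.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00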
Jan 27 08:40:57 compute-0 podman[160736]: 2026-01-27 08:40:57.298382152 +0000 UTC m=+0.149762580 container init 71e2b860e1f3a46ed1f854b1ae053c0192295c4dd3de7723d4b2d603ca489a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_babbage, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:40:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:57.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:57 compute-0 podman[160736]: 2026-01-27 08:40:57.312627545 +0000 UTC m=+0.164007943 container start 71e2b860e1f3a46ed1f854b1ae053c0192295c4dd3de7723d4b2d603ca489a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:40:57 compute-0 podman[160736]: 2026-01-27 08:40:57.317176252 +0000 UTC m=+0.168556670 container attach 71e2b860e1f3a46ed1f854b1ae053c0192295c4dd3de7723d4b2d603ca489a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:40:57 compute-0 sudo[160758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:40:57 compute-0 sudo[160758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:57 compute-0 sudo[160758]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:57 compute-0 sudo[160783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:40:57 compute-0 sudo[160783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:57 compute-0 sudo[160783]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:58 compute-0 relaxed_babbage[160753]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:40:58 compute-0 relaxed_babbage[160753]: --> relative data size: 1.0
Jan 27 08:40:58 compute-0 relaxed_babbage[160753]: --> All data devices are unavailable
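"All data devices are unavailable" means ceph-volume filtered the one LV it was passed (/dev/ceph_vg0/ceph_lv0) out of the batch rather than failing on it; one plausible cause, though the log does not say so explicitly, is that the LV already carries ceph lvm tags from an earlier prepare, which the lvm list call further down would confirm. A sketch for inspecting the tags directly, assuming LVM's JSON report format is available:

    import json
    import subprocess

    def lv_tags(vg_lv="ceph_vg0/ceph_lv0"):
        # An already-prepared OSD LV carries tags such as ceph.osd_id=...
        out = subprocess.run(
            ["lvs", "--reportformat", "json", "-o", "lv_name,lv_tags", vg_lv],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)["report"][0]["lv"]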
Jan 27 08:40:58 compute-0 systemd[1]: libpod-71e2b860e1f3a46ed1f854b1ae053c0192295c4dd3de7723d4b2d603ca489a14.scope: Deactivated successfully.
Jan 27 08:40:58 compute-0 podman[160736]: 2026-01-27 08:40:58.192191718 +0000 UTC m=+1.043572126 container died 71e2b860e1f3a46ed1f854b1ae053c0192295c4dd3de7723d4b2d603ca489a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_babbage, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:40:58 compute-0 ceph-mon[74357]: pgmap v505: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:40:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-737f51a989cba19fc74f985145fb25e880b086163e156a5c810c3d6879949f14-merged.mount: Deactivated successfully.
Jan 27 08:40:58 compute-0 podman[160736]: 2026-01-27 08:40:58.551711776 +0000 UTC m=+1.403092184 container remove 71e2b860e1f3a46ed1f854b1ae053c0192295c4dd3de7723d4b2d603ca489a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:40:58 compute-0 systemd[1]: libpod-conmon-71e2b860e1f3a46ed1f854b1ae053c0192295c4dd3de7723d4b2d603ca489a14.scope: Deactivated successfully.
Jan 27 08:40:58 compute-0 sudo[160630]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:58 compute-0 sudo[160831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:40:58 compute-0 sudo[160831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:58 compute-0 sudo[160831]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:58 compute-0 sudo[160856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:40:58 compute-0 sudo[160856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:58 compute-0 sudo[160856]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:58 compute-0 sudo[160881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:40:58 compute-0 sudo[160881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:40:58 compute-0 sudo[160881]: pam_unix(sudo:session): session closed for user root
Jan 27 08:40:58 compute-0 sudo[160906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:40:58 compute-0 sudo[160906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
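This cephadm call reports the OSD logical volumes that already exist, as JSON keyed by OSD id. A minimal sketch that mirrors the logged invocation and maps OSD ids to their LV paths; the lv_path field name is an assumption based on typical ceph-volume output:

    import json
    import subprocess

    FSID = "281e9bde-2795-59f4-98ac-90cf5b49a2de"  # from the journal above

    def list_osd_lvs():
        out = subprocess.run(
            ["cephadm", "ceph-volume", "--fsid", FSID, "--",
             "lvm", "list", "--format", "json"],
            capture_output=True, text=True, check=True).stdout
        return {osd_id: [lv.get("lv_path") for lv in lvs]
                for osd_id, lvs in json.loads(out).items()}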
Jan 27 08:40:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
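The pgmap summaries that ceph-mgr logs every couple of seconds compress cluster state into one line. A minimal sketch, assuming the line shape above, to pull the numbers out:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(\d+): (\d+) pgs: .*?; (\S+ \S+) data, (\S+ \S+) used, "
        r"(\S+ \S+) / (\S+ \S+) avail")

    def parse_pgmap(line):
        # (version, pg_count, data, used, avail, total) or None
        m = PGMAP_RE.search(line)
        return m.groups() if m else None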
Jan 27 08:40:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:40:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:40:59.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:40:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:40:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:40:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:40:59.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:40:59 compute-0 podman[160971]: 2026-01-27 08:40:59.329273628 +0000 UTC m=+0.083573541 container create 7d432f5d320063de78f33725bf34dc001ad24928f3a972aae7bd26489fe57aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:40:59 compute-0 podman[160971]: 2026-01-27 08:40:59.274553076 +0000 UTC m=+0.028853009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:40:59 compute-0 systemd[1]: Started libpod-conmon-7d432f5d320063de78f33725bf34dc001ad24928f3a972aae7bd26489fe57aad.scope.
Jan 27 08:40:59 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:40:59 compute-0 podman[160971]: 2026-01-27 08:40:59.498067063 +0000 UTC m=+0.252366996 container init 7d432f5d320063de78f33725bf34dc001ad24928f3a972aae7bd26489fe57aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:40:59 compute-0 podman[160971]: 2026-01-27 08:40:59.505995123 +0000 UTC m=+0.260295036 container start 7d432f5d320063de78f33725bf34dc001ad24928f3a972aae7bd26489fe57aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_blackburn, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:40:59 compute-0 systemd[1]: libpod-7d432f5d320063de78f33725bf34dc001ad24928f3a972aae7bd26489fe57aad.scope: Deactivated successfully.
Jan 27 08:40:59 compute-0 charming_blackburn[160987]: 167 167
Jan 27 08:40:59 compute-0 conmon[160987]: conmon 7d432f5d320063de78f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d432f5d320063de78f33725bf34dc001ad24928f3a972aae7bd26489fe57aad.scope/container/memory.events
Jan 27 08:40:59 compute-0 podman[160971]: 2026-01-27 08:40:59.515677411 +0000 UTC m=+0.269977354 container attach 7d432f5d320063de78f33725bf34dc001ad24928f3a972aae7bd26489fe57aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:40:59 compute-0 podman[160971]: 2026-01-27 08:40:59.516349189 +0000 UTC m=+0.270649112 container died 7d432f5d320063de78f33725bf34dc001ad24928f3a972aae7bd26489fe57aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 27 08:40:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-84d916f058db3bfc0b4d2b06347d73a2548eb6d806441acc66a50eced6c8ff94-merged.mount: Deactivated successfully.
Jan 27 08:40:59 compute-0 podman[160971]: 2026-01-27 08:40:59.569089807 +0000 UTC m=+0.323389720 container remove 7d432f5d320063de78f33725bf34dc001ad24928f3a972aae7bd26489fe57aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_blackburn, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:40:59 compute-0 systemd[1]: libpod-conmon-7d432f5d320063de78f33725bf34dc001ad24928f3a972aae7bd26489fe57aad.scope: Deactivated successfully.
Jan 27 08:40:59 compute-0 podman[161011]: 2026-01-27 08:40:59.780326806 +0000 UTC m=+0.062001975 container create f5d844286f44df9a2fc36152575766a046df55933cba49c45ce5d94e1e983180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:40:59 compute-0 podman[161011]: 2026-01-27 08:40:59.750678007 +0000 UTC m=+0.032353196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:40:59 compute-0 systemd[1]: Started libpod-conmon-f5d844286f44df9a2fc36152575766a046df55933cba49c45ce5d94e1e983180.scope.
Jan 27 08:40:59 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96fe3f83ee2bf5c7044ce12d218a29a47983653202e8f8b53ecbfc19a1d03890/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96fe3f83ee2bf5c7044ce12d218a29a47983653202e8f8b53ecbfc19a1d03890/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96fe3f83ee2bf5c7044ce12d218a29a47983653202e8f8b53ecbfc19a1d03890/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96fe3f83ee2bf5c7044ce12d218a29a47983653202e8f8b53ecbfc19a1d03890/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:00 compute-0 podman[161011]: 2026-01-27 08:41:00.02070351 +0000 UTC m=+0.302378679 container init f5d844286f44df9a2fc36152575766a046df55933cba49c45ce5d94e1e983180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mendeleev, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:41:00 compute-0 podman[161011]: 2026-01-27 08:41:00.028943318 +0000 UTC m=+0.310618477 container start f5d844286f44df9a2fc36152575766a046df55933cba49c45ce5d94e1e983180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:41:00 compute-0 podman[161011]: 2026-01-27 08:41:00.138334792 +0000 UTC m=+0.420009941 container attach f5d844286f44df9a2fc36152575766a046df55933cba49c45ce5d94e1e983180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 27 08:41:00 compute-0 ceph-mon[74357]: pgmap v506: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]: {
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:     "0": [
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:         {
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             "devices": [
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "/dev/loop3"
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             ],
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             "lv_name": "ceph_lv0",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             "lv_size": "7511998464",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             "name": "ceph_lv0",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             "tags": {
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "ceph.cluster_name": "ceph",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "ceph.crush_device_class": "",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "ceph.encrypted": "0",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "ceph.osd_id": "0",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "ceph.type": "block",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:                 "ceph.vdo": "0"
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             },
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             "type": "block",
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:             "vg_name": "ceph_vg0"
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:         }
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]:     ]
Jan 27 08:41:00 compute-0 nostalgic_mendeleev[161028]: }
Jan 27 08:41:00 compute-0 systemd[1]: libpod-f5d844286f44df9a2fc36152575766a046df55933cba49c45ce5d94e1e983180.scope: Deactivated successfully.
Jan 27 08:41:00 compute-0 podman[161011]: 2026-01-27 08:41:00.849033127 +0000 UTC m=+1.130708296 container died f5d844286f44df9a2fc36152575766a046df55933cba49c45ce5d94e1e983180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mendeleev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:41:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 0 B/s wr, 65 op/s
Jan 27 08:41:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:01.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-96fe3f83ee2bf5c7044ce12d218a29a47983653202e8f8b53ecbfc19a1d03890-merged.mount: Deactivated successfully.
Jan 27 08:41:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:01.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:01 compute-0 podman[161011]: 2026-01-27 08:41:01.423976178 +0000 UTC m=+1.705651327 container remove f5d844286f44df9a2fc36152575766a046df55933cba49c45ce5d94e1e983180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mendeleev, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:41:01 compute-0 systemd[1]: libpod-conmon-f5d844286f44df9a2fc36152575766a046df55933cba49c45ce5d94e1e983180.scope: Deactivated successfully.
Jan 27 08:41:01 compute-0 sudo[160906]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:01 compute-0 sudo[161052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:41:01 compute-0 sudo[161052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:41:01 compute-0 sudo[161052]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:01 compute-0 sudo[161079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:41:01 compute-0 sudo[161079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:41:01 compute-0 sshd-session[161060]: Accepted publickey for zuul from 192.168.122.30 port 33518 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:41:01 compute-0 sudo[161079]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:01 compute-0 systemd-logind[799]: New session 49 of user zuul.
Jan 27 08:41:01 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 27 08:41:01 compute-0 sshd-session[161060]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:41:01 compute-0 sudo[161105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:41:01 compute-0 sudo[161105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:41:01 compute-0 sudo[161105]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:01 compute-0 sudo[161132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:41:01 compute-0 sudo[161132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:41:02 compute-0 podman[161250]: 2026-01-27 08:41:02.119665408 +0000 UTC m=+0.082859361 container create 5bed89527fd24985e34a71013c9e2770bfa427f169b28896b58219430ccbd89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:41:02 compute-0 podman[161250]: 2026-01-27 08:41:02.060736699 +0000 UTC m=+0.023930662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:41:02 compute-0 systemd[1]: Started libpod-conmon-5bed89527fd24985e34a71013c9e2770bfa427f169b28896b58219430ccbd89c.scope.
Jan 27 08:41:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:41:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:41:02 compute-0 podman[161250]: 2026-01-27 08:41:02.288449274 +0000 UTC m=+0.251643257 container init 5bed89527fd24985e34a71013c9e2770bfa427f169b28896b58219430ccbd89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 27 08:41:02 compute-0 podman[161250]: 2026-01-27 08:41:02.298787759 +0000 UTC m=+0.261981702 container start 5bed89527fd24985e34a71013c9e2770bfa427f169b28896b58219430ccbd89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:41:02 compute-0 romantic_lalande[161291]: 167 167
Jan 27 08:41:02 compute-0 systemd[1]: libpod-5bed89527fd24985e34a71013c9e2770bfa427f169b28896b58219430ccbd89c.scope: Deactivated successfully.
Jan 27 08:41:02 compute-0 conmon[161291]: conmon 5bed89527fd24985e34a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5bed89527fd24985e34a71013c9e2770bfa427f169b28896b58219430ccbd89c.scope/container/memory.events
Jan 27 08:41:02 compute-0 podman[161250]: 2026-01-27 08:41:02.318833033 +0000 UTC m=+0.282026996 container attach 5bed89527fd24985e34a71013c9e2770bfa427f169b28896b58219430ccbd89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 27 08:41:02 compute-0 podman[161250]: 2026-01-27 08:41:02.320414187 +0000 UTC m=+0.283608140 container died 5bed89527fd24985e34a71013c9e2770bfa427f169b28896b58219430ccbd89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 08:41:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f0c6dbc6cea4f92dfc170c0a76a71106786c9fb24bcf1453a2c3e467d9a3295-merged.mount: Deactivated successfully.
Jan 27 08:41:02 compute-0 podman[161250]: 2026-01-27 08:41:02.424461933 +0000 UTC m=+0.387655886 container remove 5bed89527fd24985e34a71013c9e2770bfa427f169b28896b58219430ccbd89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:41:02 compute-0 systemd[1]: libpod-conmon-5bed89527fd24985e34a71013c9e2770bfa427f169b28896b58219430ccbd89c.scope: Deactivated successfully.
Jan 27 08:41:02 compute-0 podman[161388]: 2026-01-27 08:41:02.652810955 +0000 UTC m=+0.084540328 container create a7a4ed42e7a2640948e1c457ff7b106cc8d2ba141f8f603aa42fccd78e8dca53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:41:02 compute-0 podman[161388]: 2026-01-27 08:41:02.603995725 +0000 UTC m=+0.035725078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:41:02 compute-0 systemd[1]: Started libpod-conmon-a7a4ed42e7a2640948e1c457ff7b106cc8d2ba141f8f603aa42fccd78e8dca53.scope.
Jan 27 08:41:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cf76872c98e31e331daa1b5fa120410cc42773d84ff22b750babd0e29fc0fc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:02 compute-0 python3.9[161382]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cf76872c98e31e331daa1b5fa120410cc42773d84ff22b750babd0e29fc0fc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cf76872c98e31e331daa1b5fa120410cc42773d84ff22b750babd0e29fc0fc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cf76872c98e31e331daa1b5fa120410cc42773d84ff22b750babd0e29fc0fc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:02 compute-0 podman[161388]: 2026-01-27 08:41:02.800879797 +0000 UTC m=+0.232609160 container init a7a4ed42e7a2640948e1c457ff7b106cc8d2ba141f8f603aa42fccd78e8dca53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 08:41:02 compute-0 ceph-mon[74357]: pgmap v507: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 0 B/s wr, 65 op/s
Jan 27 08:41:02 compute-0 podman[161388]: 2026-01-27 08:41:02.812027615 +0000 UTC m=+0.243756958 container start a7a4ed42e7a2640948e1c457ff7b106cc8d2ba141f8f603aa42fccd78e8dca53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:41:02 compute-0 podman[161388]: 2026-01-27 08:41:02.827341579 +0000 UTC m=+0.259070962 container attach a7a4ed42e7a2640948e1c457ff7b106cc8d2ba141f8f603aa42fccd78e8dca53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Jan 27 08:41:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 0 B/s wr, 65 op/s
Jan 27 08:41:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:03.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:03.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:03 compute-0 pensive_nightingale[161404]: {
Jan 27 08:41:03 compute-0 pensive_nightingale[161404]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:41:03 compute-0 pensive_nightingale[161404]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:41:03 compute-0 pensive_nightingale[161404]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:41:03 compute-0 pensive_nightingale[161404]:         "osd_id": 0,
Jan 27 08:41:03 compute-0 pensive_nightingale[161404]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:41:03 compute-0 pensive_nightingale[161404]:         "type": "bluestore"
Jan 27 08:41:03 compute-0 pensive_nightingale[161404]:     }
Jan 27 08:41:03 compute-0 pensive_nightingale[161404]: }
Jan 27 08:41:03 compute-0 systemd[1]: libpod-a7a4ed42e7a2640948e1c457ff7b106cc8d2ba141f8f603aa42fccd78e8dca53.scope: Deactivated successfully.
Jan 27 08:41:03 compute-0 podman[161388]: 2026-01-27 08:41:03.811349758 +0000 UTC m=+1.243079091 container died a7a4ed42e7a2640948e1c457ff7b106cc8d2ba141f8f603aa42fccd78e8dca53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 27 08:41:03 compute-0 sudo[161579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgtjomuvcqyrfutifivhpdychydcldjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503263.3522542-62-25061592044663/AnsiballZ_command.py'
Jan 27 08:41:03 compute-0 sudo[161579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:04 compute-0 python3.9[161587]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:41:04 compute-0 sudo[161579]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:04 compute-0 ceph-mon[74357]: pgmap v508: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 0 B/s wr, 65 op/s
Jan 27 08:41:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cf76872c98e31e331daa1b5fa120410cc42773d84ff22b750babd0e29fc0fc6-merged.mount: Deactivated successfully.
Jan 27 08:41:04 compute-0 podman[161388]: 2026-01-27 08:41:04.776826925 +0000 UTC m=+2.208556268 container remove a7a4ed42e7a2640948e1c457ff7b106cc8d2ba141f8f603aa42fccd78e8dca53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:41:04 compute-0 sudo[161132]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:41:04 compute-0 systemd[1]: libpod-conmon-a7a4ed42e7a2640948e1c457ff7b106cc8d2ba141f8f603aa42fccd78e8dca53.scope: Deactivated successfully.
Jan 27 08:41:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 111 op/s
Jan 27 08:41:05 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:41:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:41:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:05.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:05 compute-0 sudo[161757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gscpgjkaxzdirngugsdrmeguhanjhawb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503264.4662817-95-126521273817390/AnsiballZ_systemd_service.py'
Jan 27 08:41:05 compute-0 sudo[161757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:05.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:05 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:41:05 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b7f94d03-0b4b-48d5-95e7-f2a8289f6acf does not exist
Jan 27 08:41:05 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 8a433e9a-35d5-4fa7-ada1-87bc9261d57b does not exist
Jan 27 08:41:05 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 40e8a3bd-f10c-4fb2-9d3a-55f555e3d7f8 does not exist
Jan 27 08:41:05 compute-0 sudo[161760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:41:05 compute-0 sudo[161760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:41:05 compute-0 sudo[161760]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:05 compute-0 sudo[161785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:41:05 compute-0 sudo[161785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:41:05 compute-0 python3.9[161759]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:41:05 compute-0 sudo[161785]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:05 compute-0 systemd[1]: Reloading.
Jan 27 08:41:05 compute-0 systemd-sysv-generator[161836]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:41:05 compute-0 systemd-rc-local-generator[161830]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:41:05 compute-0 sudo[161757]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:06 compute-0 ceph-mon[74357]: pgmap v509: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 111 op/s
Jan 27 08:41:06 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:41:06 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:41:06 compute-0 python3.9[161995]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:41:06 compute-0 network[162012]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:41:06 compute-0 network[162013]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:41:06 compute-0 network[162014]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:41:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Jan 27 08:41:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:07.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:07.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:41:08 compute-0 ceph-mon[74357]: pgmap v510: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Jan 27 08:41:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Jan 27 08:41:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:09.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:09.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:10 compute-0 ceph-mon[74357]: pgmap v511: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Jan 27 08:41:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Jan 27 08:41:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:11.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:11 compute-0 sudo[162277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejicgdiziyzbcqkvqlxzrjcyfydfxsey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503270.911906-152-73109744512004/AnsiballZ_systemd_service.py'
Jan 27 08:41:11 compute-0 sudo[162277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:11.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:11 compute-0 python3.9[162279]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:41:11 compute-0 sudo[162277]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:11 compute-0 sudo[162430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnwgwddkpfllbiockexxfuufueupbbet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503271.6656806-152-44622414231801/AnsiballZ_systemd_service.py'
Jan 27 08:41:11 compute-0 sudo[162430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:12 compute-0 python3.9[162432]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:41:12 compute-0 sudo[162430]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:41:12 compute-0 sudo[162583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wykfpekyslxbzkcjovylmkojekxiclnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503272.4375217-152-70828408366258/AnsiballZ_systemd_service.py'
Jan 27 08:41:12 compute-0 sudo[162583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:12 compute-0 ceph-mon[74357]: pgmap v512: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Jan 27 08:41:12 compute-0 python3.9[162585]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:41:13 compute-0 sudo[162583]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 0 B/s wr, 100 op/s
Jan 27 08:41:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:13.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:13.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:13 compute-0 sudo[162737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxsgbniclegnjnojyecuberurcxkgvgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503273.1505308-152-125777078743861/AnsiballZ_systemd_service.py'
Jan 27 08:41:13 compute-0 sudo[162737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:13 compute-0 python3.9[162739]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:41:13 compute-0 sudo[162737]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:14 compute-0 podman[162817]: 2026-01-27 08:41:14.277813342 +0000 UTC m=+0.090415529 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:41:14 compute-0 sudo[162916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyzsrkcbefvkypmwgvxkwvblkgzdqmwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503274.025015-152-245001346411519/AnsiballZ_systemd_service.py'
Jan 27 08:41:14 compute-0 sudo[162916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:14 compute-0 python3.9[162918]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:41:14 compute-0 sudo[162916]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:14 compute-0 ceph-mon[74357]: pgmap v513: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 0 B/s wr, 100 op/s
Jan 27 08:41:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:41:14
Jan 27 08:41:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:41:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:41:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'vms']
Jan 27 08:41:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 0 B/s wr, 100 op/s
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:41:15 compute-0 sudo[163070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hudgobnlubngqhunmucdbrkpvgjpggjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503274.84352-152-8757536747463/AnsiballZ_systemd_service.py'
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:41:15 compute-0 sudo[163070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:41:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:41:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:15.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:15.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
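
The paired anonymous "HEAD / HTTP/1.0" entries from 192.168.122.100 and .102 that recur every two seconds are load-balancer health probes against the radosgw beast frontend, not user traffic. A hedged reproduction with curl; the host:port is an assumption, since the listen address never appears in the log:

    # Assumed endpoint -- substitute the real rgw address and port.
    curl -I http://compute-0:8080/
    # Expect a 200 status with an empty body, matching the "200 0" fields above.
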
Jan 27 08:41:15 compute-0 python3.9[163072]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:41:15 compute-0 sudo[163070]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:15 compute-0 sudo[163223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhpontrvqmicddudwelbqobdbmzhwhul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503275.590687-152-58852123648561/AnsiballZ_systemd_service.py'
Jan 27 08:41:15 compute-0 sudo[163223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:16 compute-0 ceph-mon[74357]: pgmap v514: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 0 B/s wr, 100 op/s
Jan 27 08:41:16 compute-0 python3.9[163225]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:41:16 compute-0 sudo[163223]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 27 08:41:17 compute-0 sudo[163377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pekzzsqclezdchfpjwmcrabblpwcfvei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503276.7157433-308-115377706019578/AnsiballZ_file.py'
Jan 27 08:41:17 compute-0 sudo[163377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:17.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:17.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:41:17 compute-0 python3.9[163379]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:17 compute-0 sudo[163377]: pam_unix(sudo:session): session closed for user root
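
The file task above (state=absent) deletes the static unit file from /usr/lib/systemd/system; the same loop then walks the remaining tripleo_nova_virt* units. Equivalent shell, as a sketch:

    # Remove the legacy unit file; a no-op if it is already gone (state=absent semantics).
    rm -f /usr/lib/systemd/system/tripleo_nova_libvirt.target
    # systemd only notices removals after the daemon-reload issued later (08:41:29).
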
Jan 27 08:41:17 compute-0 sudo[163380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:41:17 compute-0 sudo[163380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:41:17 compute-0 sudo[163380]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:17 compute-0 sudo[163429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:41:17 compute-0 sudo[163429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:41:17 compute-0 sudo[163429]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:17 compute-0 sudo[163579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plxgdkfzzixzbqanymchmbbuptryouii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503277.6171322-308-176901456264453/AnsiballZ_file.py'
Jan 27 08:41:17 compute-0 sudo[163579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:18 compute-0 python3.9[163581]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:18 compute-0 sudo[163579]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:18 compute-0 ceph-mon[74357]: pgmap v515: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 27 08:41:18 compute-0 sudo[163731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxikmzmcubfkspcfsqispfhrlrfnihvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503278.2614477-308-227857515061882/AnsiballZ_file.py'
Jan 27 08:41:18 compute-0 sudo[163731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:18 compute-0 python3.9[163733]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:18 compute-0 sudo[163731]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:19.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:19 compute-0 sudo[163884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnhpuzxhpwieqthidduderojigzeghcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503278.91868-308-106935878405167/AnsiballZ_file.py'
Jan 27 08:41:19 compute-0 sudo[163884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:19.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:19 compute-0 python3.9[163886]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:19 compute-0 sudo[163884]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:19 compute-0 sudo[164036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrvctogyhvqmmummpphrxypsjfratobu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503279.5977278-308-54788041799905/AnsiballZ_file.py'
Jan 27 08:41:19 compute-0 sudo[164036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:20 compute-0 python3.9[164038]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:20 compute-0 sudo[164036]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:20 compute-0 sudo[164188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjxuzstjghwbghuhddabzjzqrzfafwmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503280.1982784-308-226032528313352/AnsiballZ_file.py'
Jan 27 08:41:20 compute-0 sudo[164188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:20 compute-0 ceph-mon[74357]: pgmap v516: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:20 compute-0 python3.9[164190]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:20 compute-0 sudo[164188]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:21.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:21.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:21 compute-0 sudo[164341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ospgcifxpdbyatiafhfjvsaxcmwwfvur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503280.9174135-308-38609550351381/AnsiballZ_file.py'
Jan 27 08:41:21 compute-0 sudo[164341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:21 compute-0 python3.9[164343]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:21 compute-0 sudo[164341]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:22 compute-0 sudo[164493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjbgrklzccuoslynpljybzftozpwpiib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503281.9810536-458-35379376468901/AnsiballZ_file.py'
Jan 27 08:41:22 compute-0 sudo[164493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:22 compute-0 podman[164495]: 2026-01-27 08:41:22.387022739 +0000 UTC m=+0.080628640 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
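
The podman entry above is a periodic healthcheck execution for ovn_metadata_agent: the configured test /openstack/healthcheck ran and reported health_status=healthy with a failing streak of 0. The same check can be driven by hand (container name taken from the log):

    # Run the configured healthcheck once; exit status 0 means healthy.
    podman healthcheck run ovn_metadata_agent
    # Show the rolling health state podman tracks between runs.
    podman inspect --format '{{.State.Health.Status}}' ovn_metadata_agent
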
Jan 27 08:41:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:41:22 compute-0 python3.9[164496]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:22 compute-0 sudo[164493]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:22 compute-0 ceph-mon[74357]: pgmap v517: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:23 compute-0 sudo[164665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpvrtnldmmyczzsywlgruwqkfoismdux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503282.6848357-458-67359521849249/AnsiballZ_file.py'
Jan 27 08:41:23 compute-0 sudo[164665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:23.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:23 compute-0 python3.9[164668]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:23 compute-0 sudo[164665]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:41:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:23.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:41:23 compute-0 sudo[164818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcrqvavxytmnekwvurawcjjjlounnxfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503283.4004374-458-230897679108653/AnsiballZ_file.py'
Jan 27 08:41:23 compute-0 sudo[164818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:23 compute-0 python3.9[164820]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:23 compute-0 sudo[164818]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:41:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
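
Each pg_autoscaler pair above reports the pool's share of the 22535995392-byte root capacity, applies the pool's bias (4.0 for the metadata pools), and quantizes the PG target to a power of two; every pool already sits at its target, so nothing is adjusted. The same table is available on demand:

    # Per-pool size, bias, and current vs. target PG counts, matching the log output.
    ceph osd pool autoscale-status
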
Jan 27 08:41:24 compute-0 sudo[164970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eguytwyefkoqoynezrqtkazahbtaoomq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503284.0444305-458-213622623622943/AnsiballZ_file.py'
Jan 27 08:41:24 compute-0 sudo[164970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:24 compute-0 python3.9[164972]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:24 compute-0 sudo[164970]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:24 compute-0 ceph-mon[74357]: pgmap v518: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:25 compute-0 sudo[165122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwenimxaxsnepcqaidodhwbgqelokznv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503284.7014284-458-234591256883104/AnsiballZ_file.py'
Jan 27 08:41:25 compute-0 sudo[165122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:41:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:25.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:41:25 compute-0 python3.9[165124]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:25 compute-0 sudo[165122]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:25.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:25 compute-0 sudo[165275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmizrtbjyatevfgypawofymmzxshbqbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503285.4097705-458-169639857680869/AnsiballZ_file.py'
Jan 27 08:41:25 compute-0 sudo[165275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:26 compute-0 python3.9[165277]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:26 compute-0 sudo[165275]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:26 compute-0 sudo[165427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpvlxygkruglplwgrsrqxykjxauxmee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503286.2059383-458-106036212665756/AnsiballZ_file.py'
Jan 27 08:41:26 compute-0 sudo[165427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:26 compute-0 ceph-mon[74357]: pgmap v519: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:26 compute-0 python3.9[165429]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:41:26 compute-0 sudo[165427]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:27.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:27.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:41:27 compute-0 sudo[165580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cscithkyyurkhxammhuplfmrzyozzfhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503287.1088338-611-237048284787673/AnsiballZ_command.py'
Jan 27 08:41:27 compute-0 sudo[165580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:27 compute-0 python3.9[165582]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:41:27 compute-0 sudo[165580]: pam_unix(sudo:session): session closed for user root
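
The shell task above disables certmonger only if it is active, then masks it unless a local unit file in /etc/systemd/system would be shadowed; masking links the unit name to /dev/null so nothing can start it. A quick verification sketch:

    # A masked unit reports "masked" (non-zero exit) and resolves to /dev/null.
    systemctl is-enabled certmonger.service
    ls -l /etc/systemd/system/certmonger.service
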
Jan 27 08:41:28 compute-0 python3.9[165734]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 08:41:28 compute-0 ceph-mon[74357]: pgmap v520: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:41:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:29.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:41:29 compute-0 sudo[165885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbyljmqxlniyazqfanyvqbrfmlkxghzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503288.969263-665-90287330650160/AnsiballZ_systemd_service.py'
Jan 27 08:41:29 compute-0 sudo[165885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:29.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:29 compute-0 python3.9[165887]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:41:29 compute-0 systemd[1]: Reloading.
Jan 27 08:41:29 compute-0 systemd-sysv-generator[165915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:41:29 compute-0 systemd-rc-local-generator[165909]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:41:29 compute-0 sudo[165885]: pam_unix(sudo:session): session closed for user root
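
The daemon_reload=True call maps to systemctl daemon-reload, which re-reads unit files and re-runs the generators; that is what produced the systemd-sysv-generator warning for the SysV network script and the rc.local notice above. Equivalent:

    # Re-read unit files and re-run generators, picking up the unit removals above.
    systemctl daemon-reload
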
Jan 27 08:41:30 compute-0 sudo[166072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhaqgtofajtrebwpqofxmzprsgizbivw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503290.1159945-689-275044148298269/AnsiballZ_command.py'
Jan 27 08:41:30 compute-0 sudo[166072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:30 compute-0 python3.9[166074]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:41:30 compute-0 sudo[166072]: pam_unix(sudo:session): session closed for user root
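
reset-failed clears the residual "failed" state a deleted unit can leave behind in the manager, so the tripleo_nova_* names stop appearing in failure listings; the play repeats it for each removed unit below. Sketch:

    # Drop any leftover failed state for the removed target, then verify.
    systemctl reset-failed tripleo_nova_libvirt.target
    systemctl --failed
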
Jan 27 08:41:30 compute-0 ceph-mon[74357]: pgmap v521: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:31 compute-0 sudo[166225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuzckumihgnvypiqrlwsphemctegvtxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503290.7535217-689-3934360679172/AnsiballZ_command.py'
Jan 27 08:41:31 compute-0 sudo[166225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:31.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:31 compute-0 python3.9[166228]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:41:31 compute-0 sudo[166225]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:41:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:31.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:41:31 compute-0 sudo[166379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exihpcnfgxypsowjqplgbscaleyjieqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503291.3756075-689-56511345177977/AnsiballZ_command.py'
Jan 27 08:41:31 compute-0 sudo[166379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:31 compute-0 python3.9[166381]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:41:31 compute-0 sudo[166379]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:32 compute-0 ceph-mon[74357]: pgmap v522: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:32 compute-0 sudo[166532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjoeuqsfpbdcdqumrbegfnjicenddkxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503292.0904586-689-3957816014453/AnsiballZ_command.py'
Jan 27 08:41:32 compute-0 sudo[166532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:41:32 compute-0 python3.9[166534]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:41:32 compute-0 sudo[166532]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:33 compute-0 sudo[166685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaunjsokqqfkajgvabuvjffbakguohcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503292.7215374-689-67762327544149/AnsiballZ_command.py'
Jan 27 08:41:33 compute-0 sudo[166685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:33 compute-0 python3.9[166687]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:41:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:33.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:33 compute-0 sudo[166685]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:41:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:33.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:41:33 compute-0 sudo[166839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqlhqghtdgrgsgmhurzervhkdltscbaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503293.3750417-689-35955165946329/AnsiballZ_command.py'
Jan 27 08:41:33 compute-0 sudo[166839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:33 compute-0 python3.9[166841]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:41:33 compute-0 sudo[166839]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:34 compute-0 ceph-mon[74357]: pgmap v523: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:34 compute-0 sudo[166992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylelppyhnjwzdazomohkoiqynmgwzpqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503294.0061903-689-138149278397464/AnsiballZ_command.py'
Jan 27 08:41:34 compute-0 sudo[166992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:34 compute-0 python3.9[166994]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:41:34 compute-0 sudo[166992]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:41:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:35.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:41:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:35.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:35 compute-0 sudo[167146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdldflnuqriuaqddiuxzbnhcsepxavxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503295.2477217-851-203681525456159/AnsiballZ_getent.py'
Jan 27 08:41:35 compute-0 sudo[167146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:35 compute-0 python3.9[167148]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 27 08:41:35 compute-0 sudo[167146]: pam_unix(sudo:session): session closed for user root
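
The getent task above performs an NSS lookup of a libvirt account; with fail_key=True the module fails when the key is absent, and a miss here is consistent with the group and user creation that follows. Equivalent:

    # Exit status 2 means the key is not in the passwd database.
    getent passwd libvirt
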
Jan 27 08:41:36 compute-0 ceph-mon[74357]: pgmap v524: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:36 compute-0 sudo[167299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmecotvykjyftgzphviswcjdqphjgtcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503296.1762576-875-67492640488056/AnsiballZ_group.py'
Jan 27 08:41:36 compute-0 sudo[167299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:36 compute-0 python3.9[167301]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 08:41:36 compute-0 groupadd[167302]: group added to /etc/group: name=libvirt, GID=42473
Jan 27 08:41:36 compute-0 groupadd[167302]: group added to /etc/gshadow: name=libvirt
Jan 27 08:41:36 compute-0 groupadd[167302]: new group: name=libvirt, GID=42473
Jan 27 08:41:36 compute-0 sudo[167299]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:37.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:37.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:41:37 compute-0 sudo[167432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:41:37 compute-0 sudo[167432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:41:37 compute-0 sudo[167432]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:37 compute-0 sudo[167482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufucxvzraxkdmsshoutnvrnurnifufzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503297.15783-899-100809861144887/AnsiballZ_user.py'
Jan 27 08:41:37 compute-0 sudo[167482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:37 compute-0 sudo[167486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:41:37 compute-0 sudo[167486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:41:37 compute-0 sudo[167486]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:37 compute-0 python3.9[167485]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 08:41:37 compute-0 useradd[167512]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 08:41:37 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:41:38 compute-0 sudo[167482]: pam_unix(sudo:session): session closed for user root
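
The group and user tasks above materialize the libvirt service account with fixed GID/UID 42473, a /sbin/nologin shell, and a created home directory, exactly as the groupadd and useradd journal entries record. A shell sketch of the same result:

    # Mirrors the logged invocations: fixed GID first, then a matching user.
    groupadd -g 42473 libvirt
    useradd -u 42473 -g libvirt -c 'libvirt user' -s /sbin/nologin -m libvirt
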
Jan 27 08:41:38 compute-0 ceph-mon[74357]: pgmap v525: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:38 compute-0 sudo[167669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyssdzdpxfynfiocvzrunmhjupwzkbty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503298.508579-932-250147957327151/AnsiballZ_setup.py'
Jan 27 08:41:38 compute-0 sudo[167669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:41:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:39.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:41:39 compute-0 python3.9[167671]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:41:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:39.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:39 compute-0 sudo[167669]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:39 compute-0 sudo[167754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfvufiyfmzjearqsjakwsntgzdgihkpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503298.508579-932-250147957327151/AnsiballZ_dnf.py'
Jan 27 08:41:39 compute-0 sudo[167754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:41:40 compute-0 python3.9[167756]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
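
The dnf task installs the libvirt/QEMU/Ceph client stack needed before the new virtqemud-based services come up; note that the first four names carry trailing spaces ('libvirt ', 'libvirt-admin ', ...) copied verbatim from the playbook's package list. A CLI sketch of the same install:

    # state=present: install if missing, leave installed versions alone.
    dnf -y install libvirt libvirt-admin libvirt-client libvirt-daemon \
        qemu-kvm qemu-img libguestfs libseccomp swtpm swtpm-tools \
        edk2-ovmf ceph-common cyrus-sasl-scram
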
Jan 27 08:41:41 compute-0 ceph-mon[74357]: pgmap v526: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:41.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:41:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:41.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:41:42 compute-0 ceph-mon[74357]: pgmap v527: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:41:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:41:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:43.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:41:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:43.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:44 compute-0 ceph-mon[74357]: pgmap v528: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:41:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:41:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:41:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:41:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:41:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:41:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:45.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:45 compute-0 podman[167767]: 2026-01-27 08:41:45.286910937 +0000 UTC m=+0.104067549 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 08:41:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:45.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:46 compute-0 ceph-mon[74357]: pgmap v529: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:47.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:47.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:41:48 compute-0 ceph-mon[74357]: pgmap v530: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:49.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:49.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:50 compute-0 ceph-mon[74357]: pgmap v531: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:41:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:51.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:41:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:51.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:52 compute-0 ceph-mon[74357]: pgmap v532: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:41:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:53.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:53 compute-0 podman[167800]: 2026-01-27 08:41:53.27485808 +0000 UTC m=+0.073095134 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 08:41:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:53.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:41:54.223 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:41:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:41:54.224 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:41:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:41:54.224 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:41:54 compute-0 ceph-mon[74357]: pgmap v533: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:55.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:55.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:56 compute-0 ceph-mon[74357]: pgmap v534: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:57.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:41:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:57.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:41:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:41:57 compute-0 sudo[167822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:41:57 compute-0 sudo[167822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:41:57 compute-0 sudo[167822]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:57 compute-0 sudo[167847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:41:57 compute-0 sudo[167847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:41:57 compute-0 sudo[167847]: pam_unix(sudo:session): session closed for user root
Jan 27 08:41:58 compute-0 ceph-mon[74357]: pgmap v535: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:41:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:41:59.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:41:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:41:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:41:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:41:59.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:00 compute-0 ceph-mon[74357]: pgmap v536: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:42:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:01.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:42:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:01.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:42:02 compute-0 ceph-mon[74357]: pgmap v537: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:03.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:03.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:04 compute-0 ceph-mon[74357]: pgmap v538: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:05.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:42:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:05.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:42:06 compute-0 sudo[168048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:06 compute-0 sudo[168048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:06 compute-0 sudo[168048]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:06 compute-0 sudo[168073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:42:06 compute-0 sudo[168073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:06 compute-0 sudo[168073]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:06 compute-0 sudo[168098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:06 compute-0 sudo[168098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:06 compute-0 sudo[168098]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:06 compute-0 sudo[168123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:42:06 compute-0 sudo[168123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:06 compute-0 ceph-mon[74357]: pgmap v539: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:06 compute-0 sudo[168123]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:42:06 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:42:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:42:06 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:42:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:42:06 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:42:06 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev e7e0cd60-0760-4511-932a-bda987361c49 does not exist
Jan 27 08:42:06 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 7936edae-944b-4d74-bf2d-5a5c20454014 does not exist
Jan 27 08:42:06 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f4621658-7afb-4577-ab4e-0ff992d0f11a does not exist
Jan 27 08:42:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:42:06 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:42:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:42:06 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:42:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:42:06 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:42:06 compute-0 sudo[168179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:06 compute-0 sudo[168179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:06 compute-0 sudo[168179]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:06 compute-0 auditd[705]: Audit daemon rotating log files
Jan 27 08:42:07 compute-0 sudo[168204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:42:07 compute-0 sudo[168204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:07 compute-0 sudo[168204]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:07 compute-0 sudo[168229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:07 compute-0 sudo[168229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:07 compute-0 sudo[168229]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:07 compute-0 sudo[168255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:42:07 compute-0 sudo[168255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:07.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:07.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:42:07 compute-0 podman[168320]: 2026-01-27 08:42:07.54194506 +0000 UTC m=+0.052825237 container create b392abbdc40ec7b008194a83cb5396d575069db2fcf7f1aecae5f4b6abcac051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 08:42:07 compute-0 systemd[1]: Started libpod-conmon-b392abbdc40ec7b008194a83cb5396d575069db2fcf7f1aecae5f4b6abcac051.scope.
Jan 27 08:42:07 compute-0 podman[168320]: 2026-01-27 08:42:07.521738216 +0000 UTC m=+0.032618423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:42:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:42:07 compute-0 podman[168320]: 2026-01-27 08:42:07.632065999 +0000 UTC m=+0.142946216 container init b392abbdc40ec7b008194a83cb5396d575069db2fcf7f1aecae5f4b6abcac051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:42:07 compute-0 podman[168320]: 2026-01-27 08:42:07.639596719 +0000 UTC m=+0.150476906 container start b392abbdc40ec7b008194a83cb5396d575069db2fcf7f1aecae5f4b6abcac051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:42:07 compute-0 podman[168320]: 2026-01-27 08:42:07.64285596 +0000 UTC m=+0.153736167 container attach b392abbdc40ec7b008194a83cb5396d575069db2fcf7f1aecae5f4b6abcac051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:42:07 compute-0 pedantic_cray[168336]: 167 167
Jan 27 08:42:07 compute-0 systemd[1]: libpod-b392abbdc40ec7b008194a83cb5396d575069db2fcf7f1aecae5f4b6abcac051.scope: Deactivated successfully.
Jan 27 08:42:07 compute-0 podman[168320]: 2026-01-27 08:42:07.64569282 +0000 UTC m=+0.156573007 container died b392abbdc40ec7b008194a83cb5396d575069db2fcf7f1aecae5f4b6abcac051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:42:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b870acaa85031a1ca71897248e84d7e84d693aed29590064c03b395200c7e60a-merged.mount: Deactivated successfully.
Jan 27 08:42:07 compute-0 podman[168320]: 2026-01-27 08:42:07.686097448 +0000 UTC m=+0.196977635 container remove b392abbdc40ec7b008194a83cb5396d575069db2fcf7f1aecae5f4b6abcac051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:42:07 compute-0 systemd[1]: libpod-conmon-b392abbdc40ec7b008194a83cb5396d575069db2fcf7f1aecae5f4b6abcac051.scope: Deactivated successfully.
Jan 27 08:42:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:42:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:42:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:42:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:42:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:42:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:42:07 compute-0 podman[168362]: 2026-01-27 08:42:07.85826298 +0000 UTC m=+0.045007319 container create 4b1505708da08a2f9062a5cffb9b5009dec53aaa118965e633c5711de6a67a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:42:07 compute-0 systemd[1]: Started libpod-conmon-4b1505708da08a2f9062a5cffb9b5009dec53aaa118965e633c5711de6a67a99.scope.
Jan 27 08:42:07 compute-0 podman[168362]: 2026-01-27 08:42:07.835319789 +0000 UTC m=+0.022064128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:42:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d6a3420454c5df9485929514b279a4ae67eacc6ae26be2aa1f2932b69c0632/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d6a3420454c5df9485929514b279a4ae67eacc6ae26be2aa1f2932b69c0632/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d6a3420454c5df9485929514b279a4ae67eacc6ae26be2aa1f2932b69c0632/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d6a3420454c5df9485929514b279a4ae67eacc6ae26be2aa1f2932b69c0632/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d6a3420454c5df9485929514b279a4ae67eacc6ae26be2aa1f2932b69c0632/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:07 compute-0 podman[168362]: 2026-01-27 08:42:07.974119407 +0000 UTC m=+0.160863746 container init 4b1505708da08a2f9062a5cffb9b5009dec53aaa118965e633c5711de6a67a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jang, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 27 08:42:07 compute-0 podman[168362]: 2026-01-27 08:42:07.985644929 +0000 UTC m=+0.172389248 container start 4b1505708da08a2f9062a5cffb9b5009dec53aaa118965e633c5711de6a67a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jang, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 08:42:07 compute-0 podman[168362]: 2026-01-27 08:42:07.989979051 +0000 UTC m=+0.176723380 container attach 4b1505708da08a2f9062a5cffb9b5009dec53aaa118965e633c5711de6a67a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:42:08 compute-0 frosty_jang[168378]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:42:08 compute-0 frosty_jang[168378]: --> relative data size: 1.0
Jan 27 08:42:08 compute-0 frosty_jang[168378]: --> All data devices are unavailable
Jan 27 08:42:08 compute-0 systemd[1]: libpod-4b1505708da08a2f9062a5cffb9b5009dec53aaa118965e633c5711de6a67a99.scope: Deactivated successfully.
Jan 27 08:42:08 compute-0 podman[168362]: 2026-01-27 08:42:08.828917204 +0000 UTC m=+1.015661553 container died 4b1505708da08a2f9062a5cffb9b5009dec53aaa118965e633c5711de6a67a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:42:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-06d6a3420454c5df9485929514b279a4ae67eacc6ae26be2aa1f2932b69c0632-merged.mount: Deactivated successfully.
Jan 27 08:42:08 compute-0 podman[168362]: 2026-01-27 08:42:08.883760977 +0000 UTC m=+1.070505306 container remove 4b1505708da08a2f9062a5cffb9b5009dec53aaa118965e633c5711de6a67a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jang, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:42:08 compute-0 ceph-mon[74357]: pgmap v540: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:08 compute-0 systemd[1]: libpod-conmon-4b1505708da08a2f9062a5cffb9b5009dec53aaa118965e633c5711de6a67a99.scope: Deactivated successfully.
Jan 27 08:42:08 compute-0 sudo[168255]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:08 compute-0 sudo[168407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:08 compute-0 sudo[168407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:08 compute-0 sudo[168407]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:09 compute-0 sudo[168432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:42:09 compute-0 sudo[168432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:09 compute-0 sudo[168432]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:09 compute-0 sudo[168458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:09 compute-0 sudo[168458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:09 compute-0 sudo[168458]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:09 compute-0 sudo[168483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:42:09 compute-0 sudo[168483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:09.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:09.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:09 compute-0 podman[168549]: 2026-01-27 08:42:09.508513266 +0000 UTC m=+0.064922225 container create 349792c74489eebde006e5edf3c1f73dfc7cd1118adfe0282fb1cdc28c5da01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poitras, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 08:42:09 compute-0 systemd[1]: Started libpod-conmon-349792c74489eebde006e5edf3c1f73dfc7cd1118adfe0282fb1cdc28c5da01c.scope.
Jan 27 08:42:09 compute-0 podman[168549]: 2026-01-27 08:42:09.481590753 +0000 UTC m=+0.037999762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:42:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:42:09 compute-0 podman[168549]: 2026-01-27 08:42:09.589987543 +0000 UTC m=+0.146396462 container init 349792c74489eebde006e5edf3c1f73dfc7cd1118adfe0282fb1cdc28c5da01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poitras, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:42:09 compute-0 podman[168549]: 2026-01-27 08:42:09.60136033 +0000 UTC m=+0.157769259 container start 349792c74489eebde006e5edf3c1f73dfc7cd1118adfe0282fb1cdc28c5da01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:42:09 compute-0 podman[168549]: 2026-01-27 08:42:09.605058213 +0000 UTC m=+0.161467132 container attach 349792c74489eebde006e5edf3c1f73dfc7cd1118adfe0282fb1cdc28c5da01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:42:09 compute-0 clever_poitras[168566]: 167 167
Jan 27 08:42:09 compute-0 systemd[1]: libpod-349792c74489eebde006e5edf3c1f73dfc7cd1118adfe0282fb1cdc28c5da01c.scope: Deactivated successfully.
Jan 27 08:42:09 compute-0 podman[168573]: 2026-01-27 08:42:09.652633003 +0000 UTC m=+0.030525824 container died 349792c74489eebde006e5edf3c1f73dfc7cd1118adfe0282fb1cdc28c5da01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:42:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b2e299743811cba09937ea9a396bf06407752cfe5c663b2599c6d0af950bd0c-merged.mount: Deactivated successfully.
Jan 27 08:42:09 compute-0 podman[168573]: 2026-01-27 08:42:09.702667881 +0000 UTC m=+0.080560682 container remove 349792c74489eebde006e5edf3c1f73dfc7cd1118adfe0282fb1cdc28c5da01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poitras, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:42:09 compute-0 systemd[1]: libpod-conmon-349792c74489eebde006e5edf3c1f73dfc7cd1118adfe0282fb1cdc28c5da01c.scope: Deactivated successfully.
Jan 27 08:42:09 compute-0 podman[168594]: 2026-01-27 08:42:09.894589155 +0000 UTC m=+0.056124109 container create 3f3dff7e73db0c5efb5b63774bcd645265b2dbd8554a040c490f3df8923d4e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:42:09 compute-0 systemd[1]: Started libpod-conmon-3f3dff7e73db0c5efb5b63774bcd645265b2dbd8554a040c490f3df8923d4e4a.scope.
Jan 27 08:42:09 compute-0 podman[168594]: 2026-01-27 08:42:09.864920785 +0000 UTC m=+0.026455759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:42:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:42:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfbe4d731fea32f7b55d63542c61f0d4bff788bf8aa491e70a0b8ffd610c930f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfbe4d731fea32f7b55d63542c61f0d4bff788bf8aa491e70a0b8ffd610c930f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfbe4d731fea32f7b55d63542c61f0d4bff788bf8aa491e70a0b8ffd610c930f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfbe4d731fea32f7b55d63542c61f0d4bff788bf8aa491e70a0b8ffd610c930f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:09 compute-0 podman[168594]: 2026-01-27 08:42:09.99642107 +0000 UTC m=+0.157956034 container init 3f3dff7e73db0c5efb5b63774bcd645265b2dbd8554a040c490f3df8923d4e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_euler, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:42:10 compute-0 podman[168594]: 2026-01-27 08:42:10.008867498 +0000 UTC m=+0.170402432 container start 3f3dff7e73db0c5efb5b63774bcd645265b2dbd8554a040c490f3df8923d4e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 08:42:10 compute-0 podman[168594]: 2026-01-27 08:42:10.014832685 +0000 UTC m=+0.176367639 container attach 3f3dff7e73db0c5efb5b63774bcd645265b2dbd8554a040c490f3df8923d4e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:42:10 compute-0 sweet_euler[168610]: {
Jan 27 08:42:10 compute-0 sweet_euler[168610]:     "0": [
Jan 27 08:42:10 compute-0 sweet_euler[168610]:         {
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             "devices": [
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "/dev/loop3"
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             ],
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             "lv_name": "ceph_lv0",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             "lv_size": "7511998464",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             "name": "ceph_lv0",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             "tags": {
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "ceph.cluster_name": "ceph",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "ceph.crush_device_class": "",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "ceph.encrypted": "0",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "ceph.osd_id": "0",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "ceph.type": "block",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:                 "ceph.vdo": "0"
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             },
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             "type": "block",
Jan 27 08:42:10 compute-0 sweet_euler[168610]:             "vg_name": "ceph_vg0"
Jan 27 08:42:10 compute-0 sweet_euler[168610]:         }
Jan 27 08:42:10 compute-0 sweet_euler[168610]:     ]
Jan 27 08:42:10 compute-0 sweet_euler[168610]: }
Jan 27 08:42:10 compute-0 systemd[1]: libpod-3f3dff7e73db0c5efb5b63774bcd645265b2dbd8554a040c490f3df8923d4e4a.scope: Deactivated successfully.
Jan 27 08:42:10 compute-0 podman[168594]: 2026-01-27 08:42:10.786239332 +0000 UTC m=+0.947774266 container died 3f3dff7e73db0c5efb5b63774bcd645265b2dbd8554a040c490f3df8923d4e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_euler, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:42:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfbe4d731fea32f7b55d63542c61f0d4bff788bf8aa491e70a0b8ffd610c930f-merged.mount: Deactivated successfully.
Jan 27 08:42:10 compute-0 podman[168594]: 2026-01-27 08:42:10.851353681 +0000 UTC m=+1.012888615 container remove 3f3dff7e73db0c5efb5b63774bcd645265b2dbd8554a040c490f3df8923d4e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_euler, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:42:10 compute-0 systemd[1]: libpod-conmon-3f3dff7e73db0c5efb5b63774bcd645265b2dbd8554a040c490f3df8923d4e4a.scope: Deactivated successfully.
Jan 27 08:42:10 compute-0 sudo[168483]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:10 compute-0 ceph-mon[74357]: pgmap v541: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:10 compute-0 sudo[168634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:10 compute-0 sudo[168634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:10 compute-0 sudo[168634]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:11 compute-0 sudo[168659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:42:11 compute-0 sudo[168659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:11 compute-0 sudo[168659]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:11 compute-0 sudo[168685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:11 compute-0 sudo[168685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:11 compute-0 sudo[168685]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:11 compute-0 sudo[168711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:42:11 compute-0 sudo[168711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:42:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:11.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:42:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:11.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:11 compute-0 podman[168777]: 2026-01-27 08:42:11.533209936 +0000 UTC m=+0.044260138 container create e77616c43e115bdb99d84986abe477445084c46fd9ad01bf1ec630bff3922b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:42:11 compute-0 systemd[1]: Started libpod-conmon-e77616c43e115bdb99d84986abe477445084c46fd9ad01bf1ec630bff3922b45.scope.
Jan 27 08:42:11 compute-0 podman[168777]: 2026-01-27 08:42:11.511658084 +0000 UTC m=+0.022708296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:42:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:42:11 compute-0 podman[168777]: 2026-01-27 08:42:11.630054232 +0000 UTC m=+0.141104444 container init e77616c43e115bdb99d84986abe477445084c46fd9ad01bf1ec630bff3922b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:42:11 compute-0 podman[168777]: 2026-01-27 08:42:11.639192787 +0000 UTC m=+0.150242989 container start e77616c43e115bdb99d84986abe477445084c46fd9ad01bf1ec630bff3922b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_clarke, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:42:11 compute-0 podman[168777]: 2026-01-27 08:42:11.642594163 +0000 UTC m=+0.153644345 container attach e77616c43e115bdb99d84986abe477445084c46fd9ad01bf1ec630bff3922b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_clarke, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:42:11 compute-0 thirsty_clarke[168792]: 167 167
Jan 27 08:42:11 compute-0 systemd[1]: libpod-e77616c43e115bdb99d84986abe477445084c46fd9ad01bf1ec630bff3922b45.scope: Deactivated successfully.
Jan 27 08:42:11 compute-0 podman[168777]: 2026-01-27 08:42:11.645297368 +0000 UTC m=+0.156347550 container died e77616c43e115bdb99d84986abe477445084c46fd9ad01bf1ec630bff3922b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 08:42:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-302ec2d5fc038df1f31901794f9ceb3e3ddc1c9374ee69481c587a20e05c5e00-merged.mount: Deactivated successfully.
Jan 27 08:42:11 compute-0 podman[168777]: 2026-01-27 08:42:11.682800526 +0000 UTC m=+0.193850708 container remove e77616c43e115bdb99d84986abe477445084c46fd9ad01bf1ec630bff3922b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_clarke, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 27 08:42:11 compute-0 systemd[1]: libpod-conmon-e77616c43e115bdb99d84986abe477445084c46fd9ad01bf1ec630bff3922b45.scope: Deactivated successfully.
Jan 27 08:42:11 compute-0 podman[168817]: 2026-01-27 08:42:11.875153392 +0000 UTC m=+0.053736534 container create 9cbaf348a82f03de61de631f27c843050e5d5eb3ed6ac933e39be190ed438f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 27 08:42:11 compute-0 systemd[1]: Started libpod-conmon-9cbaf348a82f03de61de631f27c843050e5d5eb3ed6ac933e39be190ed438f5a.scope.
Jan 27 08:42:11 compute-0 podman[168817]: 2026-01-27 08:42:11.855813451 +0000 UTC m=+0.034396603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:42:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c1d6b902286752e0f424893f0607ed3f695a71cd49e67bcd20107d6a11b5a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c1d6b902286752e0f424893f0607ed3f695a71cd49e67bcd20107d6a11b5a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c1d6b902286752e0f424893f0607ed3f695a71cd49e67bcd20107d6a11b5a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c1d6b902286752e0f424893f0607ed3f695a71cd49e67bcd20107d6a11b5a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:11 compute-0 podman[168817]: 2026-01-27 08:42:11.979472387 +0000 UTC m=+0.158055569 container init 9cbaf348a82f03de61de631f27c843050e5d5eb3ed6ac933e39be190ed438f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:42:11 compute-0 podman[168817]: 2026-01-27 08:42:11.991812931 +0000 UTC m=+0.170396063 container start 9cbaf348a82f03de61de631f27c843050e5d5eb3ed6ac933e39be190ed438f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:42:11 compute-0 podman[168817]: 2026-01-27 08:42:11.997641074 +0000 UTC m=+0.176224206 container attach 9cbaf348a82f03de61de631f27c843050e5d5eb3ed6ac933e39be190ed438f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:42:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:42:12 compute-0 relaxed_bassi[168833]: {
Jan 27 08:42:12 compute-0 relaxed_bassi[168833]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:42:12 compute-0 relaxed_bassi[168833]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:42:12 compute-0 relaxed_bassi[168833]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:42:12 compute-0 relaxed_bassi[168833]:         "osd_id": 0,
Jan 27 08:42:12 compute-0 relaxed_bassi[168833]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:42:12 compute-0 relaxed_bassi[168833]:         "type": "bluestore"
Jan 27 08:42:12 compute-0 relaxed_bassi[168833]:     }
Jan 27 08:42:12 compute-0 relaxed_bassi[168833]: }
Jan 27 08:42:12 compute-0 systemd[1]: libpod-9cbaf348a82f03de61de631f27c843050e5d5eb3ed6ac933e39be190ed438f5a.scope: Deactivated successfully.
Jan 27 08:42:12 compute-0 podman[168817]: 2026-01-27 08:42:12.863214812 +0000 UTC m=+1.041797964 container died 9cbaf348a82f03de61de631f27c843050e5d5eb3ed6ac933e39be190ed438f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 27 08:42:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-56c1d6b902286752e0f424893f0607ed3f695a71cd49e67bcd20107d6a11b5a3-merged.mount: Deactivated successfully.
Jan 27 08:42:12 compute-0 podman[168817]: 2026-01-27 08:42:12.929850375 +0000 UTC m=+1.108433507 container remove 9cbaf348a82f03de61de631f27c843050e5d5eb3ed6ac933e39be190ed438f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:42:12 compute-0 ceph-mon[74357]: pgmap v542: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:12 compute-0 systemd[1]: libpod-conmon-9cbaf348a82f03de61de631f27c843050e5d5eb3ed6ac933e39be190ed438f5a.scope: Deactivated successfully.
Jan 27 08:42:12 compute-0 sudo[168711]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:42:12 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:42:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:42:12 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:42:12 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 8e1939a2-6898-4cbf-a17d-d22aeeccd0b6 does not exist
Jan 27 08:42:12 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 63eb84e0-7b94-4723-9883-046f4dd57279 does not exist
Jan 27 08:42:12 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 20945f03-6d14-49c6-a39e-b6bf42897c44 does not exist
Jan 27 08:42:13 compute-0 sudo[168867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:13 compute-0 sudo[168867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:13 compute-0 sudo[168867]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:13 compute-0 sudo[168893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:42:13 compute-0 sudo[168893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:13 compute-0 sudo[168893]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:13.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:13.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:42:13 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:42:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:42:14
Jan 27 08:42:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:42:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:42:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'vms', 'images', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data']
Jan 27 08:42:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:42:15 compute-0 ceph-mon[74357]: pgmap v543: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:42:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:42:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:15.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:15.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:16 compute-0 ceph-mon[74357]: pgmap v544: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:16 compute-0 podman[168919]: 2026-01-27 08:42:16.325133715 +0000 UTC m=+0.126410573 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 08:42:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:17.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:42:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:17.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:42:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:42:17 compute-0 sudo[168947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:17 compute-0 sudo[168947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:17 compute-0 sudo[168947]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:18 compute-0 sudo[168972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:18 compute-0 sudo[168972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:18 compute-0 sudo[168972]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:18 compute-0 ceph-mon[74357]: pgmap v545: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:19.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:19.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:19 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 27 08:42:19 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 08:42:19 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 08:42:19 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 08:42:19 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 08:42:19 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 08:42:19 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 08:42:19 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 08:42:20 compute-0 ceph-mon[74357]: pgmap v546: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:42:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:21.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:42:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:21.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:22 compute-0 ceph-mon[74357]: pgmap v547: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:42:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:42:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:23.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:42:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:23.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:42:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:42:24 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 27 08:42:24 compute-0 podman[169009]: 2026-01-27 08:42:24.263852882 +0000 UTC m=+0.064024780 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 08:42:24 compute-0 ceph-mon[74357]: pgmap v548: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:25.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:25.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:26 compute-0 ceph-mon[74357]: pgmap v549: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:27.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:42:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:42:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:27.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:42:28 compute-0 ceph-mon[74357]: pgmap v550: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:42:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:42:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:29.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:29 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 27 08:42:29 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 08:42:29 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 08:42:29 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 08:42:29 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 08:42:29 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 08:42:29 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 08:42:29 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 08:42:30 compute-0 ceph-mon[74357]: pgmap v551: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:42:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:31.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:42:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:31.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:42:32 compute-0 ceph-mon[74357]: pgmap v552: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:33.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:42:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:33.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:42:34 compute-0 ceph-mon[74357]: pgmap v553: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:35.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:35.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:37.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:37 compute-0 ceph-mon[74357]: pgmap v554: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:42:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:37.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:38 compute-0 sudo[169044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:38 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 27 08:42:38 compute-0 sudo[169044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:38 compute-0 sudo[169044]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:38 compute-0 sudo[169069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:38 compute-0 sudo[169069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:38 compute-0 sudo[169069]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:38 compute-0 ceph-mon[74357]: pgmap v555: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:39.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:42:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:39.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:42:40 compute-0 ceph-mon[74357]: pgmap v556: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:41.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:41.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:42:42 compute-0 ceph-mon[74357]: pgmap v557: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:43.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:43.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:44 compute-0 ceph-mon[74357]: pgmap v558: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:42:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:42:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:42:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:42:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:42:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:42:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:42:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:45.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:42:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:45.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:46 compute-0 ceph-mon[74357]: pgmap v559: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:47.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:47 compute-0 podman[172100]: 2026-01-27 08:42:47.303405433 +0000 UTC m=+0.101681672 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 08:42:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:42:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:47.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:48 compute-0 ceph-mon[74357]: pgmap v560: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:49.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:49.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:51 compute-0 ceph-mon[74357]: pgmap v561: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:51.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:51.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:42:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:53.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:53 compute-0 ceph-mon[74357]: pgmap v562: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:53.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:42:54.225 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:42:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:42:54.225 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:42:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:42:54.226 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:42:55 compute-0 ceph-mon[74357]: pgmap v563: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:55 compute-0 podman[176854]: 2026-01-27 08:42:55.233741008 +0000 UTC m=+0.054774794 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 08:42:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:42:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:55.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:42:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:42:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:55.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:42:57 compute-0 ceph-mon[74357]: pgmap v564: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:57.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:42:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:57.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:58 compute-0 sudo[178582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:58 compute-0 sudo[178582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:58 compute-0 sudo[178582]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:58 compute-0 sudo[178651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:42:58 compute-0 sudo[178651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:42:58 compute-0 sudo[178651]: pam_unix(sudo:session): session closed for user root
Jan 27 08:42:59 compute-0 ceph-mon[74357]: pgmap v565: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:42:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:42:59.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:42:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:42:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:42:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:42:59.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:01 compute-0 ceph-mon[74357]: pgmap v566: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:01.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:43:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:01.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:43:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:43:03 compute-0 ceph-mon[74357]: pgmap v567: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:03.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:03.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:05 compute-0 ceph-mon[74357]: pgmap v568: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:05.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:05.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:06 compute-0 ceph-mon[74357]: pgmap v569: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:07.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:43:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:07.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:08 compute-0 ceph-mon[74357]: pgmap v570: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:09.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:09.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:10 compute-0 ceph-mon[74357]: pgmap v571: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:11.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:11.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:43:12 compute-0 ceph-mon[74357]: pgmap v572: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:13.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:43:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:13.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:43:13 compute-0 sudo[186075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:13 compute-0 sudo[186075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:13 compute-0 sudo[186075]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:13 compute-0 sudo[186100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:43:13 compute-0 sudo[186100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:13 compute-0 sudo[186100]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:13 compute-0 sudo[186125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:13 compute-0 sudo[186125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:13 compute-0 sudo[186125]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:13 compute-0 sudo[186151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 27 08:43:13 compute-0 sudo[186151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:14 compute-0 podman[186247]: 2026-01-27 08:43:14.238275726 +0000 UTC m=+0.063038914 container exec b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:43:14 compute-0 podman[186247]: 2026-01-27 08:43:14.352927755 +0000 UTC m=+0.177691004 container exec_died b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:43:14 compute-0 podman[186386]: 2026-01-27 08:43:14.878025012 +0000 UTC m=+0.068264478 container exec 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:43:14 compute-0 podman[186386]: 2026-01-27 08:43:14.888318517 +0000 UTC m=+0.078557973 container exec_died 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:43:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:43:14
Jan 27 08:43:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:43:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:43:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', '.rgw.root', 'volumes', 'vms', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 27 08:43:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:43:15 compute-0 ceph-mon[74357]: pgmap v573: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:43:15 compute-0 podman[186452]: 2026-01-27 08:43:15.145355973 +0000 UTC m=+0.081782641 container exec eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, architecture=x86_64, com.redhat.component=keepalived-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, version=2.2.4, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2)
Jan 27 08:43:15 compute-0 podman[186452]: 2026-01-27 08:43:15.161642624 +0000 UTC m=+0.098069282 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, io.openshift.tags=Ceph keepalived, release=1793, architecture=x86_64, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vcs-type=git, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:15 compute-0 sudo[186151]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:15.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:43:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:15.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:43:15 compute-0 ceph-mgr[74650]: client.0 ms_handle_reset on v2:192.168.122.100:6800/510010839
Jan 27 08:43:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:16 compute-0 sudo[186503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:16 compute-0 sudo[186503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:16 compute-0 sudo[186503]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:16 compute-0 sudo[186528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:43:16 compute-0 sudo[186528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:16 compute-0 sudo[186528]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:16 compute-0 sudo[186553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:16 compute-0 sudo[186553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:16 compute-0 sudo[186553]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:16 compute-0 sudo[186578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:43:16 compute-0 sudo[186578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:43:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:43:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:16 compute-0 ceph-mon[74357]: pgmap v574: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:16 compute-0 sudo[186578]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:17.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.434141) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503397434177, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1744, "num_deletes": 251, "total_data_size": 3257474, "memory_usage": 3311216, "flush_reason": "Manual Compaction"}
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503397451067, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 3200824, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12335, "largest_seqno": 14078, "table_properties": {"data_size": 3192864, "index_size": 4839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14549, "raw_average_key_size": 18, "raw_value_size": 3177256, "raw_average_value_size": 4001, "num_data_blocks": 218, "num_entries": 794, "num_filter_entries": 794, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769503206, "oldest_key_time": 1769503206, "file_creation_time": 1769503397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 16984 microseconds, and 6682 cpu microseconds.
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.451120) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 3200824 bytes OK
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.451142) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.452767) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.452787) EVENT_LOG_v1 {"time_micros": 1769503397452781, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.452807) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3250391, prev total WAL file size 3250391, number of live WAL files 2.
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.453615) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(3125KB)], [29(8259KB)]
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503397453672, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 11658046, "oldest_snapshot_seqno": -1}
Jan 27 08:43:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:17.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4283 keys, 11120137 bytes, temperature: kUnknown
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503397509537, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 11120137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11086225, "index_size": 22090, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 104357, "raw_average_key_size": 24, "raw_value_size": 11003645, "raw_average_value_size": 2569, "num_data_blocks": 942, "num_entries": 4283, "num_filter_entries": 4283, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769503397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.509957) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11120137 bytes
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.511596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.1 rd, 198.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.1 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 4800, records dropped: 517 output_compression: NoCompression
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.511632) EVENT_LOG_v1 {"time_micros": 1769503397511615, "job": 12, "event": "compaction_finished", "compaction_time_micros": 56008, "compaction_time_cpu_micros": 27421, "output_level": 6, "num_output_files": 1, "total_output_size": 11120137, "num_input_records": 4800, "num_output_records": 4283, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503397512990, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503397516051, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.453547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.516187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.516192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.516193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.516195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:43:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:43:17.516197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:43:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:43:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:43:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:43:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:43:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:43:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:17 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev a5173cf6-b6f6-4ac8-8f1a-16eb3437624d does not exist
Jan 27 08:43:17 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 43a6cd79-9b87-4bf3-a12e-8ef06a602c8d does not exist
Jan 27 08:43:17 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 0bdaa749-032e-4b7f-893e-7061c79a037a does not exist
Jan 27 08:43:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:43:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:43:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:43:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:43:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:43:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:43:17 compute-0 sudo[186635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:17 compute-0 sudo[186635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:17 compute-0 sudo[186635]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:17 compute-0 sudo[186666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:43:17 compute-0 sudo[186666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:17 compute-0 sudo[186666]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:17 compute-0 sudo[186704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:17 compute-0 sudo[186704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:17 compute-0 sudo[186704]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:17 compute-0 podman[186659]: 2026-01-27 08:43:17.991004656 +0000 UTC m=+0.190647632 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:43:18 compute-0 sudo[186735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:43:18 compute-0 sudo[186735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:18 compute-0 podman[186801]: 2026-01-27 08:43:18.363756861 +0000 UTC m=+0.045156449 container create 1e11cfca2d070dadbb2406a0f15d1549ee5123e0d203f2501db9a15d67301186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mayer, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:43:18 compute-0 systemd[1]: Started libpod-conmon-1e11cfca2d070dadbb2406a0f15d1549ee5123e0d203f2501db9a15d67301186.scope.
Jan 27 08:43:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:43:18 compute-0 podman[186801]: 2026-01-27 08:43:18.343492811 +0000 UTC m=+0.024892409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:43:18 compute-0 podman[186801]: 2026-01-27 08:43:18.441642844 +0000 UTC m=+0.123042452 container init 1e11cfca2d070dadbb2406a0f15d1549ee5123e0d203f2501db9a15d67301186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mayer, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:43:18 compute-0 podman[186801]: 2026-01-27 08:43:18.449794219 +0000 UTC m=+0.131193807 container start 1e11cfca2d070dadbb2406a0f15d1549ee5123e0d203f2501db9a15d67301186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mayer, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:43:18 compute-0 podman[186801]: 2026-01-27 08:43:18.455282291 +0000 UTC m=+0.136681909 container attach 1e11cfca2d070dadbb2406a0f15d1549ee5123e0d203f2501db9a15d67301186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mayer, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 08:43:18 compute-0 quirky_mayer[186817]: 167 167
Jan 27 08:43:18 compute-0 ceph-mon[74357]: pgmap v575: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:43:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:43:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:43:18 compute-0 systemd[1]: libpod-1e11cfca2d070dadbb2406a0f15d1549ee5123e0d203f2501db9a15d67301186.scope: Deactivated successfully.
Jan 27 08:43:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:43:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:43:18 compute-0 podman[186801]: 2026-01-27 08:43:18.457507002 +0000 UTC m=+0.138906610 container died 1e11cfca2d070dadbb2406a0f15d1549ee5123e0d203f2501db9a15d67301186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:43:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d1b21a45eecae13040959bfd2f492f222671be55f0eed7ac74aceb9512a83d5-merged.mount: Deactivated successfully.
Jan 27 08:43:18 compute-0 podman[186801]: 2026-01-27 08:43:18.502783414 +0000 UTC m=+0.184183002 container remove 1e11cfca2d070dadbb2406a0f15d1549ee5123e0d203f2501db9a15d67301186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mayer, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:43:18 compute-0 sudo[186820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:18 compute-0 sudo[186820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:18 compute-0 sudo[186820]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:18 compute-0 systemd[1]: libpod-conmon-1e11cfca2d070dadbb2406a0f15d1549ee5123e0d203f2501db9a15d67301186.scope: Deactivated successfully.
Jan 27 08:43:18 compute-0 sudo[186858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:18 compute-0 sudo[186858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:18 compute-0 sudo[186858]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:18 compute-0 podman[186889]: 2026-01-27 08:43:18.671384786 +0000 UTC m=+0.046514388 container create b4b5c9d0fde96085e181a3dae1ac4295aac650b95a5f5686cfb7e2391cfd1e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:43:18 compute-0 systemd[1]: Started libpod-conmon-b4b5c9d0fde96085e181a3dae1ac4295aac650b95a5f5686cfb7e2391cfd1e0a.scope.
Jan 27 08:43:18 compute-0 podman[186889]: 2026-01-27 08:43:18.650426716 +0000 UTC m=+0.025556408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:43:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864ba14767bc18fad98e891c9cc2986d25184ff7b814bcfd0f82cd9dd52c38d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864ba14767bc18fad98e891c9cc2986d25184ff7b814bcfd0f82cd9dd52c38d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864ba14767bc18fad98e891c9cc2986d25184ff7b814bcfd0f82cd9dd52c38d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864ba14767bc18fad98e891c9cc2986d25184ff7b814bcfd0f82cd9dd52c38d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864ba14767bc18fad98e891c9cc2986d25184ff7b814bcfd0f82cd9dd52c38d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:18 compute-0 podman[186889]: 2026-01-27 08:43:18.767556474 +0000 UTC m=+0.142686076 container init b4b5c9d0fde96085e181a3dae1ac4295aac650b95a5f5686cfb7e2391cfd1e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:43:18 compute-0 podman[186889]: 2026-01-27 08:43:18.774415023 +0000 UTC m=+0.149544655 container start b4b5c9d0fde96085e181a3dae1ac4295aac650b95a5f5686cfb7e2391cfd1e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 08:43:18 compute-0 podman[186889]: 2026-01-27 08:43:18.778277291 +0000 UTC m=+0.153406923 container attach b4b5c9d0fde96085e181a3dae1ac4295aac650b95a5f5686cfb7e2391cfd1e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:43:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:19.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:19.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:19 compute-0 festive_mccarthy[186906]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:43:19 compute-0 festive_mccarthy[186906]: --> relative data size: 1.0
Jan 27 08:43:19 compute-0 festive_mccarthy[186906]: --> All data devices are unavailable
Jan 27 08:43:19 compute-0 systemd[1]: libpod-b4b5c9d0fde96085e181a3dae1ac4295aac650b95a5f5686cfb7e2391cfd1e0a.scope: Deactivated successfully.
Jan 27 08:43:19 compute-0 podman[186889]: 2026-01-27 08:43:19.679871657 +0000 UTC m=+1.055001259 container died b4b5c9d0fde96085e181a3dae1ac4295aac650b95a5f5686cfb7e2391cfd1e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-864ba14767bc18fad98e891c9cc2986d25184ff7b814bcfd0f82cd9dd52c38d1-merged.mount: Deactivated successfully.
Jan 27 08:43:19 compute-0 podman[186889]: 2026-01-27 08:43:19.73060966 +0000 UTC m=+1.105739262 container remove b4b5c9d0fde96085e181a3dae1ac4295aac650b95a5f5686cfb7e2391cfd1e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:43:19 compute-0 systemd[1]: libpod-conmon-b4b5c9d0fde96085e181a3dae1ac4295aac650b95a5f5686cfb7e2391cfd1e0a.scope: Deactivated successfully.
Jan 27 08:43:19 compute-0 sudo[186735]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:19 compute-0 sudo[186936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:19 compute-0 sudo[186936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:19 compute-0 sudo[186936]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:19 compute-0 sudo[186961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:43:19 compute-0 sudo[186961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:19 compute-0 sudo[186961]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:19 compute-0 sudo[186986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:19 compute-0 sudo[186986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:19 compute-0 sudo[186986]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:19 compute-0 sudo[187011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:43:19 compute-0 sudo[187011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:20 compute-0 podman[187076]: 2026-01-27 08:43:20.266978138 +0000 UTC m=+0.048060260 container create 14840a53615bc0b6a677efcb7293b29d9a3f05d73f9214f5bedf6c45300d9f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:43:20 compute-0 systemd[1]: Started libpod-conmon-14840a53615bc0b6a677efcb7293b29d9a3f05d73f9214f5bedf6c45300d9f8e.scope.
Jan 27 08:43:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:43:20 compute-0 podman[187076]: 2026-01-27 08:43:20.25295273 +0000 UTC m=+0.034034872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:43:20 compute-0 podman[187076]: 2026-01-27 08:43:20.350323002 +0000 UTC m=+0.131405174 container init 14840a53615bc0b6a677efcb7293b29d9a3f05d73f9214f5bedf6c45300d9f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:43:20 compute-0 podman[187076]: 2026-01-27 08:43:20.358677753 +0000 UTC m=+0.139759875 container start 14840a53615bc0b6a677efcb7293b29d9a3f05d73f9214f5bedf6c45300d9f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dijkstra, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:43:20 compute-0 strange_dijkstra[187092]: 167 167
Jan 27 08:43:20 compute-0 systemd[1]: libpod-14840a53615bc0b6a677efcb7293b29d9a3f05d73f9214f5bedf6c45300d9f8e.scope: Deactivated successfully.
Jan 27 08:43:20 compute-0 conmon[187092]: conmon 14840a53615bc0b6a677 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14840a53615bc0b6a677efcb7293b29d9a3f05d73f9214f5bedf6c45300d9f8e.scope/container/memory.events
Jan 27 08:43:20 compute-0 podman[187076]: 2026-01-27 08:43:20.36546203 +0000 UTC m=+0.146544202 container attach 14840a53615bc0b6a677efcb7293b29d9a3f05d73f9214f5bedf6c45300d9f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 08:43:20 compute-0 podman[187076]: 2026-01-27 08:43:20.366074308 +0000 UTC m=+0.147156450 container died 14840a53615bc0b6a677efcb7293b29d9a3f05d73f9214f5bedf6c45300d9f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 27 08:43:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b1efb05e347e4596d8edcf329a21a4483a5ffc817c6cbc6b60a7d4e25a389f1-merged.mount: Deactivated successfully.
Jan 27 08:43:20 compute-0 podman[187076]: 2026-01-27 08:43:20.415031041 +0000 UTC m=+0.196113163 container remove 14840a53615bc0b6a677efcb7293b29d9a3f05d73f9214f5bedf6c45300d9f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:43:20 compute-0 systemd[1]: libpod-conmon-14840a53615bc0b6a677efcb7293b29d9a3f05d73f9214f5bedf6c45300d9f8e.scope: Deactivated successfully.
Jan 27 08:43:20 compute-0 podman[187117]: 2026-01-27 08:43:20.593254459 +0000 UTC m=+0.046819716 container create 7b6564c76c689f12d6f5da5a5f226be2fa7205b5991420847b18e5206f5fb923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swartz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:43:20 compute-0 ceph-mon[74357]: pgmap v576: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:20 compute-0 systemd[1]: Started libpod-conmon-7b6564c76c689f12d6f5da5a5f226be2fa7205b5991420847b18e5206f5fb923.scope.
Jan 27 08:43:20 compute-0 podman[187117]: 2026-01-27 08:43:20.573576115 +0000 UTC m=+0.027141402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:43:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9abf0b6b61c7a5cf6db8ba54137ed319e65a86b31f0e2e48243523090640ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9abf0b6b61c7a5cf6db8ba54137ed319e65a86b31f0e2e48243523090640ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9abf0b6b61c7a5cf6db8ba54137ed319e65a86b31f0e2e48243523090640ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9abf0b6b61c7a5cf6db8ba54137ed319e65a86b31f0e2e48243523090640ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:20 compute-0 podman[187117]: 2026-01-27 08:43:20.692819741 +0000 UTC m=+0.146385028 container init 7b6564c76c689f12d6f5da5a5f226be2fa7205b5991420847b18e5206f5fb923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:43:20 compute-0 podman[187117]: 2026-01-27 08:43:20.704931535 +0000 UTC m=+0.158496832 container start 7b6564c76c689f12d6f5da5a5f226be2fa7205b5991420847b18e5206f5fb923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swartz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:43:20 compute-0 podman[187117]: 2026-01-27 08:43:20.710291104 +0000 UTC m=+0.163856421 container attach 7b6564c76c689f12d6f5da5a5f226be2fa7205b5991420847b18e5206f5fb923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 08:43:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:21.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:21 compute-0 goofy_swartz[187134]: {
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:     "0": [
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:         {
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             "devices": [
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "/dev/loop3"
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             ],
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             "lv_name": "ceph_lv0",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             "lv_size": "7511998464",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             "name": "ceph_lv0",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             "tags": {
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "ceph.cluster_name": "ceph",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "ceph.crush_device_class": "",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "ceph.encrypted": "0",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "ceph.osd_id": "0",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "ceph.type": "block",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:                 "ceph.vdo": "0"
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             },
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             "type": "block",
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:             "vg_name": "ceph_vg0"
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:         }
Jan 27 08:43:21 compute-0 goofy_swartz[187134]:     ]
Jan 27 08:43:21 compute-0 goofy_swartz[187134]: }
Jan 27 08:43:21 compute-0 systemd[1]: libpod-7b6564c76c689f12d6f5da5a5f226be2fa7205b5991420847b18e5206f5fb923.scope: Deactivated successfully.
Jan 27 08:43:21 compute-0 podman[187117]: 2026-01-27 08:43:21.45308617 +0000 UTC m=+0.906651427 container died 7b6564c76c689f12d6f5da5a5f226be2fa7205b5991420847b18e5206f5fb923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 08:43:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a9abf0b6b61c7a5cf6db8ba54137ed319e65a86b31f0e2e48243523090640ff-merged.mount: Deactivated successfully.
Jan 27 08:43:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:21.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:21 compute-0 podman[187117]: 2026-01-27 08:43:21.508406899 +0000 UTC m=+0.961972146 container remove 7b6564c76c689f12d6f5da5a5f226be2fa7205b5991420847b18e5206f5fb923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swartz, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:43:21 compute-0 systemd[1]: libpod-conmon-7b6564c76c689f12d6f5da5a5f226be2fa7205b5991420847b18e5206f5fb923.scope: Deactivated successfully.
Jan 27 08:43:21 compute-0 sudo[187011]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:21 compute-0 sudo[187159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:21 compute-0 sudo[187159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:21 compute-0 sudo[187159]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:21 compute-0 sudo[187185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:43:21 compute-0 sudo[187185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:21 compute-0 sudo[187185]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:21 compute-0 sudo[187212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:21 compute-0 sudo[187212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:21 compute-0 sudo[187212]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:21 compute-0 sudo[187237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:43:21 compute-0 sudo[187237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:22 compute-0 podman[187302]: 2026-01-27 08:43:22.039718698 +0000 UTC m=+0.017897526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:43:22 compute-0 podman[187302]: 2026-01-27 08:43:22.369840435 +0000 UTC m=+0.348019243 container create 985b10b52e29184a0c6b37a82ca9b1da38b235def41a42aeb4e9b0e32414fcff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bouman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:43:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:43:22 compute-0 systemd[1]: Started libpod-conmon-985b10b52e29184a0c6b37a82ca9b1da38b235def41a42aeb4e9b0e32414fcff.scope.
Jan 27 08:43:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:43:22 compute-0 podman[187302]: 2026-01-27 08:43:22.5480303 +0000 UTC m=+0.526209138 container init 985b10b52e29184a0c6b37a82ca9b1da38b235def41a42aeb4e9b0e32414fcff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bouman, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 08:43:22 compute-0 podman[187302]: 2026-01-27 08:43:22.554043858 +0000 UTC m=+0.532222656 container start 985b10b52e29184a0c6b37a82ca9b1da38b235def41a42aeb4e9b0e32414fcff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bouman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:43:22 compute-0 wonderful_bouman[187318]: 167 167
Jan 27 08:43:22 compute-0 systemd[1]: libpod-985b10b52e29184a0c6b37a82ca9b1da38b235def41a42aeb4e9b0e32414fcff.scope: Deactivated successfully.
Jan 27 08:43:22 compute-0 conmon[187318]: conmon 985b10b52e29184a0c6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-985b10b52e29184a0c6b37a82ca9b1da38b235def41a42aeb4e9b0e32414fcff.scope/container/memory.events
Jan 27 08:43:22 compute-0 podman[187302]: 2026-01-27 08:43:22.570307016 +0000 UTC m=+0.548485834 container attach 985b10b52e29184a0c6b37a82ca9b1da38b235def41a42aeb4e9b0e32414fcff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:43:22 compute-0 podman[187302]: 2026-01-27 08:43:22.571097059 +0000 UTC m=+0.549275867 container died 985b10b52e29184a0c6b37a82ca9b1da38b235def41a42aeb4e9b0e32414fcff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 27 08:43:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-93240f13d24e0f2c652d2fa2661bb74fe25422bc328417e83d73d84ef295ea12-merged.mount: Deactivated successfully.
Jan 27 08:43:22 compute-0 podman[187302]: 2026-01-27 08:43:22.625626936 +0000 UTC m=+0.603805744 container remove 985b10b52e29184a0c6b37a82ca9b1da38b235def41a42aeb4e9b0e32414fcff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 08:43:22 compute-0 systemd[1]: libpod-conmon-985b10b52e29184a0c6b37a82ca9b1da38b235def41a42aeb4e9b0e32414fcff.scope: Deactivated successfully.
Jan 27 08:43:22 compute-0 kernel: SELinux:  Converting 2778 SID table entries...
Jan 27 08:43:22 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 08:43:22 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 08:43:22 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 08:43:22 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 08:43:22 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 08:43:22 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 08:43:22 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 08:43:22 compute-0 podman[187344]: 2026-01-27 08:43:22.801502409 +0000 UTC m=+0.054493729 container create c71c0d8dadd14556e4cdfb321d0aa64ccd90ea60e72382b63519e56ebae636af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:43:22 compute-0 systemd[1]: Started libpod-conmon-c71c0d8dadd14556e4cdfb321d0aa64ccd90ea60e72382b63519e56ebae636af.scope.
Jan 27 08:43:22 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 27 08:43:22 compute-0 podman[187344]: 2026-01-27 08:43:22.769332479 +0000 UTC m=+0.022323809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:43:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed7f3517d409ca03aab9369a9f9cc55b4a01155bb2b66bbc961bde3c73b97a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed7f3517d409ca03aab9369a9f9cc55b4a01155bb2b66bbc961bde3c73b97a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed7f3517d409ca03aab9369a9f9cc55b4a01155bb2b66bbc961bde3c73b97a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed7f3517d409ca03aab9369a9f9cc55b4a01155bb2b66bbc961bde3c73b97a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:22 compute-0 podman[187344]: 2026-01-27 08:43:22.895502737 +0000 UTC m=+0.148494077 container init c71c0d8dadd14556e4cdfb321d0aa64ccd90ea60e72382b63519e56ebae636af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 27 08:43:22 compute-0 podman[187344]: 2026-01-27 08:43:22.902155851 +0000 UTC m=+0.155147171 container start c71c0d8dadd14556e4cdfb321d0aa64ccd90ea60e72382b63519e56ebae636af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 27 08:43:22 compute-0 podman[187344]: 2026-01-27 08:43:22.905988207 +0000 UTC m=+0.158979527 container attach c71c0d8dadd14556e4cdfb321d0aa64ccd90ea60e72382b63519e56ebae636af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:43:22 compute-0 ceph-mon[74357]: pgmap v577: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:23.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:23.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:23 compute-0 hungry_greider[187361]: {
Jan 27 08:43:23 compute-0 hungry_greider[187361]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:43:23 compute-0 hungry_greider[187361]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:43:23 compute-0 hungry_greider[187361]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:43:23 compute-0 hungry_greider[187361]:         "osd_id": 0,
Jan 27 08:43:23 compute-0 hungry_greider[187361]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:43:23 compute-0 hungry_greider[187361]:         "type": "bluestore"
Jan 27 08:43:23 compute-0 hungry_greider[187361]:     }
Jan 27 08:43:23 compute-0 hungry_greider[187361]: }
Jan 27 08:43:23 compute-0 systemd[1]: libpod-c71c0d8dadd14556e4cdfb321d0aa64ccd90ea60e72382b63519e56ebae636af.scope: Deactivated successfully.
Jan 27 08:43:23 compute-0 podman[187344]: 2026-01-27 08:43:23.753966161 +0000 UTC m=+1.006957481 container died c71c0d8dadd14556e4cdfb321d0aa64ccd90ea60e72382b63519e56ebae636af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:43:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ed7f3517d409ca03aab9369a9f9cc55b4a01155bb2b66bbc961bde3c73b97a7-merged.mount: Deactivated successfully.
Jan 27 08:43:23 compute-0 podman[187344]: 2026-01-27 08:43:23.837662695 +0000 UTC m=+1.090654015 container remove c71c0d8dadd14556e4cdfb321d0aa64ccd90ea60e72382b63519e56ebae636af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:43:23 compute-0 systemd[1]: libpod-conmon-c71c0d8dadd14556e4cdfb321d0aa64ccd90ea60e72382b63519e56ebae636af.scope: Deactivated successfully.
Jan 27 08:43:23 compute-0 sudo[187237]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:43:23 compute-0 groupadd[187404]: group added to /etc/group: name=dnsmasq, GID=992
Jan 27 08:43:23 compute-0 groupadd[187404]: group added to /etc/gshadow: name=dnsmasq
Jan 27 08:43:23 compute-0 groupadd[187404]: new group: name=dnsmasq, GID=992
Jan 27 08:43:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:43:23 compute-0 useradd[187411]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 27 08:43:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:23 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d8139f01-f390-4ca2-aeb4-160750c70c0f does not exist
Jan 27 08:43:23 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b1156e42-4a4d-4f3e-bb11-754f9a4ebe8a does not exist
Jan 27 08:43:23 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 64ba0ff3-1d69-4b61-a080-aff00afd472a does not exist
Jan 27 08:43:23 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Jan 27 08:43:24 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Jan 27 08:43:24 compute-0 sudo[187412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:24 compute-0 sudo[187412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:24 compute-0 sudo[187412]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:24 compute-0 sudo[187446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:43:24 compute-0 sudo[187446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:24 compute-0 sudo[187446]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:43:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:43:24 compute-0 groupadd[187474]: group added to /etc/group: name=clevis, GID=991
Jan 27 08:43:24 compute-0 groupadd[187474]: group added to /etc/gshadow: name=clevis
Jan 27 08:43:24 compute-0 groupadd[187474]: new group: name=clevis, GID=991
Jan 27 08:43:24 compute-0 ceph-mon[74357]: pgmap v578: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:43:24 compute-0 useradd[187481]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 27 08:43:25 compute-0 usermod[187491]: add 'clevis' to group 'tss'
Jan 27 08:43:25 compute-0 usermod[187491]: add 'clevis' to shadow group 'tss'
Jan 27 08:43:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:25.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:43:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:25.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:43:25 compute-0 podman[187505]: 2026-01-27 08:43:25.621731558 +0000 UTC m=+0.065964075 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:43:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:27 compute-0 ceph-mon[74357]: pgmap v579: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:27.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:43:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:27.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:28 compute-0 polkitd[43485]: Reloading rules
Jan 27 08:43:28 compute-0 polkitd[43485]: Collecting garbage unconditionally...
Jan 27 08:43:28 compute-0 polkitd[43485]: Loading rules from directory /etc/polkit-1/rules.d
Jan 27 08:43:28 compute-0 polkitd[43485]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 27 08:43:28 compute-0 polkitd[43485]: Finished loading, compiling and executing 3 rules
Jan 27 08:43:28 compute-0 polkitd[43485]: Reloading rules
Jan 27 08:43:28 compute-0 polkitd[43485]: Collecting garbage unconditionally...
Jan 27 08:43:28 compute-0 polkitd[43485]: Loading rules from directory /etc/polkit-1/rules.d
Jan 27 08:43:28 compute-0 polkitd[43485]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 27 08:43:28 compute-0 polkitd[43485]: Finished loading, compiling and executing 3 rules
Jan 27 08:43:28 compute-0 ceph-mon[74357]: pgmap v580: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:29 compute-0 groupadd[187704]: group added to /etc/group: name=ceph, GID=167
Jan 27 08:43:29 compute-0 groupadd[187704]: group added to /etc/gshadow: name=ceph
Jan 27 08:43:29 compute-0 groupadd[187704]: new group: name=ceph, GID=167
Jan 27 08:43:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:29 compute-0 useradd[187710]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 27 08:43:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:43:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:29.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:43:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:29.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:30 compute-0 ceph-mon[74357]: pgmap v581: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:31.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:31.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:43:32 compute-0 sshd[1008]: Received signal 15; terminating.
Jan 27 08:43:32 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 27 08:43:32 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 27 08:43:32 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 27 08:43:32 compute-0 systemd[1]: sshd.service: Consumed 2.987s CPU time, read 32.0K from disk, written 124.0K to disk.
Jan 27 08:43:32 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 27 08:43:32 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 27 08:43:32 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 08:43:32 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 08:43:32 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 08:43:32 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 27 08:43:32 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 27 08:43:32 compute-0 sshd[188336]: Server listening on 0.0.0.0 port 22.
Jan 27 08:43:32 compute-0 sshd[188336]: Server listening on :: port 22.
Jan 27 08:43:32 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 27 08:43:32 compute-0 ceph-mon[74357]: pgmap v582: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:33.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:33.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:34 compute-0 ceph-mon[74357]: pgmap v583: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:43:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:43:34 compute-0 systemd[1]: Reloading.
Jan 27 08:43:34 compute-0 systemd-rc-local-generator[188599]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:43:34 compute-0 systemd-sysv-generator[188603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:43:35 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:43:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:43:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:35.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:43:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:35.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:37 compute-0 sudo[167754]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:37 compute-0 ceph-mon[74357]: pgmap v584: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:43:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:37.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:43:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:43:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:37.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:38 compute-0 ceph-mon[74357]: pgmap v585: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:38 compute-0 sudo[192861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:38 compute-0 sudo[192861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:38 compute-0 sudo[192861]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:38 compute-0 sudo[192957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:38 compute-0 sudo[192957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:38 compute-0 sudo[192957]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:39.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:39.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:40 compute-0 ceph-mon[74357]: pgmap v586: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:41.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:41.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:43:42 compute-0 ceph-mon[74357]: pgmap v587: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:43 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:43:43 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:43:43 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.414s CPU time.
Jan 27 08:43:43 compute-0 systemd[1]: run-r17e5dd51413247fea1b59ae210398aae.service: Deactivated successfully.
Jan 27 08:43:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:43:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:43.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:43:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:43.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:44 compute-0 ceph-mon[74357]: pgmap v588: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:43:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:43:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:43:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:43:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:43:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:43:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:43:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:45.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:43:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:43:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:45.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:43:46 compute-0 ceph-mon[74357]: pgmap v589: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 27 08:43:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:47.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 27 08:43:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:43:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:48 compute-0 sudo[197200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szbafzbfdtqehlpiasmzbphhquosmmzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503427.5059593-968-9143962974013/AnsiballZ_systemd.py'
Jan 27 08:43:48 compute-0 sudo[197200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:43:48 compute-0 podman[197157]: 2026-01-27 08:43:48.245244778 +0000 UTC m=+0.143348287 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:43:48 compute-0 python3.9[197209]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:43:48 compute-0 systemd[1]: Reloading.
Jan 27 08:43:48 compute-0 systemd-rc-local-generator[197238]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:43:48 compute-0 systemd-sysv-generator[197242]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:43:48 compute-0 sudo[197200]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:49 compute-0 ceph-mon[74357]: pgmap v590: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:49.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:49 compute-0 sudo[197399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyvipnzpyjjyfpliglyriraojvmdnxej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503429.0532959-968-105285911006038/AnsiballZ_systemd.py'
Jan 27 08:43:49 compute-0 sudo[197399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:43:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:49 compute-0 python3.9[197401]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:43:49 compute-0 systemd[1]: Reloading.
Jan 27 08:43:49 compute-0 systemd-rc-local-generator[197432]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:43:49 compute-0 systemd-sysv-generator[197435]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:43:50 compute-0 sudo[197399]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:50 compute-0 ceph-mon[74357]: pgmap v591: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:50 compute-0 sudo[197590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjspnwswbgwhjptdvlrzarnnpfttankv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503430.281017-968-218581181705952/AnsiballZ_systemd.py'
Jan 27 08:43:50 compute-0 sudo[197590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:43:50 compute-0 python3.9[197592]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:43:50 compute-0 systemd[1]: Reloading.
Jan 27 08:43:51 compute-0 systemd-sysv-generator[197625]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:43:51 compute-0 systemd-rc-local-generator[197621]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:43:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:51 compute-0 sudo[197590]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:51.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:51.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:51 compute-0 sudo[197781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujvvbvnwpwxburrpoptnfzydjbbchvox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503431.5791073-968-84923853378388/AnsiballZ_systemd.py'
Jan 27 08:43:51 compute-0 sudo[197781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:43:52 compute-0 python3.9[197783]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:43:52 compute-0 systemd[1]: Reloading.
Jan 27 08:43:52 compute-0 systemd-rc-local-generator[197811]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:43:52 compute-0 systemd-sysv-generator[197814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:43:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:43:52 compute-0 ceph-mon[74357]: pgmap v592: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:52 compute-0 sudo[197781]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:53.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:53.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:54 compute-0 sudo[197972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkasrfvgzyymvhkklbirdclhddkgaaqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503433.855993-1055-120565276869989/AnsiballZ_systemd.py'
Jan 27 08:43:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:43:54.226 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:43:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:43:54.226 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:43:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:43:54.226 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:43:54 compute-0 sudo[197972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:43:54 compute-0 ceph-mon[74357]: pgmap v593: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:54 compute-0 python3.9[197974]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:43:54 compute-0 systemd[1]: Reloading.
Jan 27 08:43:54 compute-0 systemd-rc-local-generator[198004]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:43:54 compute-0 systemd-sysv-generator[198007]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:43:54 compute-0 sudo[197972]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:55 compute-0 sudo[198162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riregzzpurxpyrhowznktgodofbiwven ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503435.009045-1055-80056556037566/AnsiballZ_systemd.py'
Jan 27 08:43:55 compute-0 sudo[198162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:43:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:55.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:55.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:55 compute-0 python3.9[198164]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:43:55 compute-0 systemd[1]: Reloading.
Jan 27 08:43:55 compute-0 podman[198166]: 2026-01-27 08:43:55.720844091 +0000 UTC m=+0.055364892 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 27 08:43:55 compute-0 systemd-rc-local-generator[198213]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:43:55 compute-0 systemd-sysv-generator[198216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:43:56 compute-0 sudo[198162]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:56 compute-0 sudo[198370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agdgxhqysuwbecqnksnkfkazdqdodqgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503436.163096-1055-201262787290279/AnsiballZ_systemd.py'
Jan 27 08:43:56 compute-0 sudo[198370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:43:56 compute-0 ceph-mon[74357]: pgmap v594: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:56 compute-0 python3.9[198372]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:43:56 compute-0 systemd[1]: Reloading.
Jan 27 08:43:56 compute-0 systemd-rc-local-generator[198404]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:43:56 compute-0 systemd-sysv-generator[198407]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:43:57 compute-0 sudo[198370]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:57.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:43:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:57.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:57 compute-0 sudo[198562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfznvrafrlnxffpyefytlptxnzpustsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503437.4116058-1055-146128858675309/AnsiballZ_systemd.py'
Jan 27 08:43:57 compute-0 sudo[198562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:43:58 compute-0 python3.9[198564]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:43:58 compute-0 sudo[198562]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:58 compute-0 ceph-mon[74357]: pgmap v595: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:58 compute-0 sudo[198718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgmzeqewkyjegenkweygemjksjbnxshj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503438.4371548-1055-264925922132331/AnsiballZ_systemd.py'
Jan 27 08:43:58 compute-0 sudo[198718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:43:58 compute-0 sudo[198717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:58 compute-0 sudo[198717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:58 compute-0 sudo[198717]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:58 compute-0 sudo[198745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:43:58 compute-0 sudo[198745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:43:58 compute-0 sudo[198745]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:59 compute-0 python3.9[198739]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:43:59 compute-0 systemd[1]: Reloading.
Jan 27 08:43:59 compute-0 systemd-sysv-generator[198804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:43:59 compute-0 systemd-rc-local-generator[198799]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:43:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:43:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:43:59.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:43:59 compute-0 sudo[198718]: pam_unix(sudo:session): session closed for user root
Jan 27 08:43:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:43:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:43:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:43:59.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:00 compute-0 ceph-mon[74357]: pgmap v596: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:00 compute-0 sudo[198959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzlesuaxszimkmtqedzzqeouivdxxuin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503440.423046-1163-240698633734239/AnsiballZ_systemd.py'
Jan 27 08:44:00 compute-0 sudo[198959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:01 compute-0 python3.9[198961]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:44:01 compute-0 systemd[1]: Reloading.
Jan 27 08:44:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:01 compute-0 systemd-sysv-generator[198998]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:44:01 compute-0 systemd-rc-local-generator[198994]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:44:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:01.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:01 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 27 08:44:01 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 27 08:44:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:01.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:01 compute-0 sudo[198959]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:02 compute-0 sudo[199153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnudxbmzzuiwcnuywcygxvvlmwvzjykh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503442.0021634-1187-30819809737923/AnsiballZ_systemd.py'
Jan 27 08:44:02 compute-0 sudo[199153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:44:02 compute-0 ceph-mon[74357]: pgmap v597: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:02 compute-0 python3.9[199155]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:02 compute-0 sudo[199153]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:03 compute-0 sudo[199309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnaxkmqnhyvwhaolmphgleeqpddyvhxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503442.8506598-1187-176457477560398/AnsiballZ_systemd.py'
Jan 27 08:44:03 compute-0 sudo[199309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:03.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:44:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:03.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:44:03 compute-0 python3.9[199311]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:03 compute-0 sudo[199309]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:04 compute-0 sudo[199464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhvjaomhejmtzdufborxzpikntxzffdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503443.7986863-1187-135891606638381/AnsiballZ_systemd.py'
Jan 27 08:44:04 compute-0 sudo[199464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:04 compute-0 python3.9[199466]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:04 compute-0 sudo[199464]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:04 compute-0 ceph-mon[74357]: pgmap v598: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:05 compute-0 sudo[199619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dczyqgmvzmwbsmtprioshmgpbsbdfscw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503444.6777654-1187-99723106340071/AnsiballZ_systemd.py'
Jan 27 08:44:05 compute-0 sudo[199619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:05 compute-0 python3.9[199621]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:05.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:05 compute-0 sudo[199619]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:05.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:05 compute-0 sudo[199775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adzjoaolipogdmkygsvxkpdmpyjznnme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503445.504176-1187-7148525786686/AnsiballZ_systemd.py'
Jan 27 08:44:05 compute-0 sudo[199775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:06 compute-0 python3.9[199777]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:06 compute-0 sudo[199775]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:06 compute-0 ceph-mon[74357]: pgmap v599: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:06 compute-0 sudo[199930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfscxrclemppjevtrujojghrbsfjxemg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503446.4571857-1187-74593672272703/AnsiballZ_systemd.py'
Jan 27 08:44:06 compute-0 sudo[199930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:07 compute-0 python3.9[199932]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:07 compute-0 sudo[199930]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:44:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:07.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:44:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:44:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:44:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:07.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:44:07 compute-0 sudo[200086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azzxhkbgbfktdewamtmutmxxzcqizkuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503447.317651-1187-162534787478251/AnsiballZ_systemd.py'
Jan 27 08:44:07 compute-0 sudo[200086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:08 compute-0 python3.9[200088]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:08 compute-0 sudo[200086]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:08 compute-0 sudo[200241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pygfwpqtqydoockhjallqwlseeggmgio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503448.2837753-1187-178475937762710/AnsiballZ_systemd.py'
Jan 27 08:44:08 compute-0 sudo[200241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:08 compute-0 ceph-mon[74357]: pgmap v600: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:08 compute-0 python3.9[200243]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:09 compute-0 sudo[200241]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:44:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:09.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:44:09 compute-0 sudo[200397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkdewbxvhwdjahgijpriozdkjwojvvnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503449.217014-1187-118905658355096/AnsiballZ_systemd.py'
Jan 27 08:44:09 compute-0 sudo[200397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:09.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:09 compute-0 python3.9[200399]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:09 compute-0 sudo[200397]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:10 compute-0 sudo[200552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcqhikkuapzduxbqmujzqltfzoqmvqps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503450.0761597-1187-20430320114168/AnsiballZ_systemd.py'
Jan 27 08:44:10 compute-0 sudo[200552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:10 compute-0 python3.9[200554]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:10 compute-0 ceph-mon[74357]: pgmap v601: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:10 compute-0 sudo[200552]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:11 compute-0 sudo[200708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfnrcqrrwssyxlobjnbefihigujmzvxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503450.8696973-1187-70923861900971/AnsiballZ_systemd.py'
Jan 27 08:44:11 compute-0 sudo[200708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:11.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:11 compute-0 python3.9[200710]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:11 compute-0 sudo[200708]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:11.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:12 compute-0 sudo[200863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyfrmoikpdwuzpzxeltwtxsdczmzzbie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503451.6832159-1187-197318989094572/AnsiballZ_systemd.py'
Jan 27 08:44:12 compute-0 sudo[200863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:12 compute-0 python3.9[200865]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:12 compute-0 sudo[200863]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:44:12 compute-0 ceph-mon[74357]: pgmap v602: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:12 compute-0 sudo[201018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcklndmyyiyzbcacutobexykbggcisxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503452.4724448-1187-156398284176707/AnsiballZ_systemd.py'
Jan 27 08:44:12 compute-0 sudo[201018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:13 compute-0 python3.9[201020]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:13 compute-0 sudo[201018]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:44:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:13.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:44:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:13.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:13 compute-0 sudo[201174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbibazntorfesffqegfdusdpliokdywm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503453.4447002-1187-243737130383332/AnsiballZ_systemd.py'
Jan 27 08:44:13 compute-0 sudo[201174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:14 compute-0 python3.9[201176]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:44:14 compute-0 sudo[201174]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:14 compute-0 ceph-mon[74357]: pgmap v603: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:44:14
Jan 27 08:44:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:44:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:44:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'vms']
Jan 27 08:44:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:44:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:44:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:15.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:44:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:44:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:15.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:44:16 compute-0 sudo[201330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dglrgpkwfnbolajvsemumispfeerydyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503455.8287737-1493-224010362314799/AnsiballZ_file.py'
Jan 27 08:44:16 compute-0 sudo[201330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:16 compute-0 python3.9[201332]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:44:16 compute-0 sudo[201330]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:16 compute-0 sudo[201482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgwjwzkvxetzkatqocxovpjtlsfciavg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503456.4727066-1493-66232601183952/AnsiballZ_file.py'
Jan 27 08:44:16 compute-0 sudo[201482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:16 compute-0 python3.9[201484]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:44:16 compute-0 sudo[201482]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:17 compute-0 ceph-mon[74357]: pgmap v604: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:17 compute-0 sudo[201635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsvwjkxuukctstfbogfegtejefekkdaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503457.104787-1493-259599469534204/AnsiballZ_file.py'
Jan 27 08:44:17 compute-0 sudo[201635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:17.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:44:17 compute-0 python3.9[201637]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:44:17 compute-0 sudo[201635]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:17.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:18 compute-0 sudo[201787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-serfijzmzxkqilfoajpjrpzheanatfii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503457.7108436-1493-205638334216989/AnsiballZ_file.py'
Jan 27 08:44:18 compute-0 sudo[201787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:18 compute-0 python3.9[201789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:44:18 compute-0 sudo[201787]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:18 compute-0 ceph-mon[74357]: pgmap v605: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:18 compute-0 sudo[201952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmujsshibrcqqgwfpcckdofujajflrug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503458.3966017-1493-59681118274730/AnsiballZ_file.py'
Jan 27 08:44:18 compute-0 sudo[201952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:18 compute-0 podman[201913]: 2026-01-27 08:44:18.941622049 +0000 UTC m=+0.097553240 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 27 08:44:18 compute-0 sudo[201968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:18 compute-0 sudo[201968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:18 compute-0 sudo[201968]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:19 compute-0 sudo[201993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:19 compute-0 sudo[201993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:19 compute-0 sudo[201993]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:19 compute-0 python3.9[201960]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:44:19 compute-0 sudo[201952]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:19.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:19.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:19 compute-0 sudo[202168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idyhjjcrvquczubiduirxcibpaxhbahr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503459.284277-1493-35314896455058/AnsiballZ_file.py'
Jan 27 08:44:19 compute-0 sudo[202168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:19 compute-0 python3.9[202170]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:44:19 compute-0 sudo[202168]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:20 compute-0 ceph-mon[74357]: pgmap v606: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:21.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:21 compute-0 python3.9[202321]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:44:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:21.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:22 compute-0 sudo[202471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpnlstqulndffmdnvewndksvqszhryhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503461.738798-1646-148255110661351/AnsiballZ_stat.py'
Jan 27 08:44:22 compute-0 sudo[202471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:44:22 compute-0 python3.9[202473]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:22 compute-0 sudo[202471]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:22 compute-0 sudo[202596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzmvexiotswnfyoelbhayakjkfnjjhjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503461.738798-1646-148255110661351/AnsiballZ_copy.py'
Jan 27 08:44:23 compute-0 sudo[202596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:23 compute-0 ceph-mon[74357]: pgmap v607: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:23 compute-0 python3.9[202598]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769503461.738798-1646-148255110661351/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:23 compute-0 sudo[202596]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:23.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:44:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:23.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:44:23 compute-0 sudo[202749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ythqmalkasfbspeojayoaxxzngdsdbyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503463.4072566-1646-88465833056896/AnsiballZ_stat.py'
Jan 27 08:44:23 compute-0 sudo[202749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:23 compute-0 python3.9[202751]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:23 compute-0 sudo[202749]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:44:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:44:24 compute-0 sudo[202874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vclisebwugxzzbwkdafwtljkpsdvcwfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503463.4072566-1646-88465833056896/AnsiballZ_copy.py'
Jan 27 08:44:24 compute-0 sudo[202874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:24 compute-0 sudo[202877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:24 compute-0 sudo[202877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:24 compute-0 sudo[202877]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:24 compute-0 sudo[202902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:44:24 compute-0 sudo[202902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:24 compute-0 sudo[202902]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:24 compute-0 sudo[202927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:24 compute-0 sudo[202927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:24 compute-0 sudo[202927]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:24 compute-0 python3.9[202876]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769503463.4072566-1646-88465833056896/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:24 compute-0 sudo[202874]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:24 compute-0 sudo[202952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:44:24 compute-0 sudo[202952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:25 compute-0 ceph-mon[74357]: pgmap v608: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:25 compute-0 sudo[202952]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:25 compute-0 sudo[203158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzbjusxpylccsypbckardhsxjpcxvwdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503464.9745536-1646-42826598280202/AnsiballZ_stat.py'
Jan 27 08:44:25 compute-0 sudo[203158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:44:25 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:44:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:44:25 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:44:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:44:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:25.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:25 compute-0 python3.9[203160]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:25 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:44:25 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 812b2eb6-b42a-4e54-8d48-e571a478df98 does not exist
Jan 27 08:44:25 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5224042f-2234-4642-91c5-126d7edb364d does not exist
Jan 27 08:44:25 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 4fc3b091-ce34-4b77-92ba-43c3f43d0da8 does not exist
Jan 27 08:44:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:44:25 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:44:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:44:25 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:44:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:44:25 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:44:25 compute-0 sudo[203158]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:25 compute-0 sudo[203163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:25 compute-0 sudo[203163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:25 compute-0 sudo[203163]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:25.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:25 compute-0 sudo[203211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:44:25 compute-0 sudo[203211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:25 compute-0 sudo[203211]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:25 compute-0 sudo[203260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:25 compute-0 sudo[203260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:25 compute-0 sudo[203260]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:25 compute-0 sudo[203305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:44:25 compute-0 sudo[203305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:25 compute-0 sudo[203395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zykaqwxxiqitexkqyvvqcrlahzspjndy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503464.9745536-1646-42826598280202/AnsiballZ_copy.py'
Jan 27 08:44:25 compute-0 sudo[203395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:26 compute-0 python3.9[203399]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769503464.9745536-1646-42826598280202/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:26 compute-0 podman[203425]: 2026-01-27 08:44:26.05515616 +0000 UTC m=+0.029953163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:44:26 compute-0 sudo[203395]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:26 compute-0 podman[203425]: 2026-01-27 08:44:26.194666461 +0000 UTC m=+0.169463484 container create d9f9db20f7ed12c8b8996c1263b419186b46a1476a0db67fe3f31e8b4f72209d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_golick, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:44:26 compute-0 systemd[1]: Started libpod-conmon-d9f9db20f7ed12c8b8996c1263b419186b46a1476a0db67fe3f31e8b4f72209d.scope.
Jan 27 08:44:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:44:26 compute-0 podman[203425]: 2026-01-27 08:44:26.534251455 +0000 UTC m=+0.509048458 container init d9f9db20f7ed12c8b8996c1263b419186b46a1476a0db67fe3f31e8b4f72209d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_golick, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:44:26 compute-0 podman[203425]: 2026-01-27 08:44:26.549160305 +0000 UTC m=+0.523957288 container start d9f9db20f7ed12c8b8996c1263b419186b46a1476a0db67fe3f31e8b4f72209d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 27 08:44:26 compute-0 eloquent_golick[203551]: 167 167
Jan 27 08:44:26 compute-0 systemd[1]: libpod-d9f9db20f7ed12c8b8996c1263b419186b46a1476a0db67fe3f31e8b4f72209d.scope: Deactivated successfully.
Jan 27 08:44:26 compute-0 conmon[203551]: conmon d9f9db20f7ed12c8b899 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9f9db20f7ed12c8b8996c1263b419186b46a1476a0db67fe3f31e8b4f72209d.scope/container/memory.events
Jan 27 08:44:26 compute-0 podman[203425]: 2026-01-27 08:44:26.596397113 +0000 UTC m=+0.571194106 container attach d9f9db20f7ed12c8b8996c1263b419186b46a1476a0db67fe3f31e8b4f72209d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 27 08:44:26 compute-0 podman[203425]: 2026-01-27 08:44:26.597346758 +0000 UTC m=+0.572143761 container died d9f9db20f7ed12c8b8996c1263b419186b46a1476a0db67fe3f31e8b4f72209d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:44:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:44:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:44:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:44:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:44:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:44:26 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:44:26 compute-0 sudo[203625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecqgttqudmivlceisbhltgvllobzesdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503466.2885182-1646-199691980598692/AnsiballZ_stat.py'
Jan 27 08:44:26 compute-0 sudo[203625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:26 compute-0 podman[203439]: 2026-01-27 08:44:26.822703376 +0000 UTC m=+0.678990495 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 08:44:26 compute-0 python3.9[203627]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-61afad425153ce5ca3b0b957547ee6b6ca2293e453ab641ee1620e539d9ee45d-merged.mount: Deactivated successfully.
Jan 27 08:44:26 compute-0 sudo[203625]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:27 compute-0 sudo[203752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gatyhsoejufhqugtbwqjarxbhptlsecm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503466.2885182-1646-199691980598692/AnsiballZ_copy.py'
Jan 27 08:44:27 compute-0 sudo[203752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:27.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:44:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:27.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:27 compute-0 podman[203425]: 2026-01-27 08:44:27.593891061 +0000 UTC m=+1.568688044 container remove d9f9db20f7ed12c8b8996c1263b419186b46a1476a0db67fe3f31e8b4f72209d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_golick, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:44:27 compute-0 systemd[1]: libpod-conmon-d9f9db20f7ed12c8b8996c1263b419186b46a1476a0db67fe3f31e8b4f72209d.scope: Deactivated successfully.
Jan 27 08:44:27 compute-0 python3.9[203754]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769503466.2885182-1646-199691980598692/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:27 compute-0 sudo[203752]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:27 compute-0 ceph-mon[74357]: pgmap v609: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:27 compute-0 podman[203763]: 2026-01-27 08:44:27.795483937 +0000 UTC m=+0.026795037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:44:27 compute-0 podman[203763]: 2026-01-27 08:44:27.91217303 +0000 UTC m=+0.143484120 container create 3faaa9479bab617cae46dd80185eb55862274893d0e93292acfcd396b8ec2b40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_carson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 08:44:27 compute-0 systemd[1]: Started libpod-conmon-3faaa9479bab617cae46dd80185eb55862274893d0e93292acfcd396b8ec2b40.scope.
Jan 27 08:44:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73f77543049062b8e92910a8b19e4c23d1a0efcc708e034d641c36b9ec965c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73f77543049062b8e92910a8b19e4c23d1a0efcc708e034d641c36b9ec965c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73f77543049062b8e92910a8b19e4c23d1a0efcc708e034d641c36b9ec965c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73f77543049062b8e92910a8b19e4c23d1a0efcc708e034d641c36b9ec965c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73f77543049062b8e92910a8b19e4c23d1a0efcc708e034d641c36b9ec965c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
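The repeated xfs messages above are emitted once per bind mount into the ceph container: the backing filesystem stores 32-bit inode timestamps, so the kernel flags the 2038 cutoff (0x7fffffff seconds past the Unix epoch) at every remount. They are informational, not errors. What that hex limit means is easy to verify; a small Python sketch:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 seconds after the epoch: the Y2038 limit
    # the kernel is noting for these xfs mounts.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00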
Jan 27 08:44:28 compute-0 podman[203763]: 2026-01-27 08:44:28.051140656 +0000 UTC m=+0.282451796 container init 3faaa9479bab617cae46dd80185eb55862274893d0e93292acfcd396b8ec2b40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_carson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:44:28 compute-0 podman[203763]: 2026-01-27 08:44:28.067973069 +0000 UTC m=+0.299284209 container start 3faaa9479bab617cae46dd80185eb55862274893d0e93292acfcd396b8ec2b40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_carson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 27 08:44:28 compute-0 podman[203763]: 2026-01-27 08:44:28.071991639 +0000 UTC m=+0.303302849 container attach 3faaa9479bab617cae46dd80185eb55862274893d0e93292acfcd396b8ec2b40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:44:28 compute-0 sudo[203932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uappvpzcvgtdfxicluypboyyzuuwatfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503467.9288435-1646-93860400942025/AnsiballZ_stat.py'
Jan 27 08:44:28 compute-0 sudo[203932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:28 compute-0 python3.9[203934]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:28 compute-0 sudo[203932]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:28 compute-0 sudo[204065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnqybucvqglmteqpfivmfzpsormxfoqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503467.9288435-1646-93860400942025/AnsiballZ_copy.py'
Jan 27 08:44:28 compute-0 sudo[204065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:28 compute-0 cool_carson[203848]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:44:28 compute-0 cool_carson[203848]: --> relative data size: 1.0
Jan 27 08:44:28 compute-0 cool_carson[203848]: --> All data devices are unavailable
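The three cool_carson lines above are the report phase of a cephadm-driven ceph-volume batch run: one LVM data device was passed in, the relative data size is 1.0 (the OSD would get the whole LV), and the run ends as a no-op because the device is already consumed by an existing OSD, hence "unavailable". A minimal sketch for spotting that no-op outcome in captured output, assuming the exact message text shown here (these strings are not a stable interface):

    # Minimal sketch: detect the "nothing to do" case in a ceph-volume
    # batch report scraped from the journal.
    def batch_was_noop(report_lines):
        return any("All data devices are unavailable" in line
                   for line in report_lines)

    report = [
        "--> passed data devices: 0 physical, 1 LVM",
        "--> relative data size: 1.0",
        "--> All data devices are unavailable",
    ]
    assert batch_was_noop(report)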
Jan 27 08:44:28 compute-0 podman[203763]: 2026-01-27 08:44:28.96357886 +0000 UTC m=+1.194889950 container died 3faaa9479bab617cae46dd80185eb55862274893d0e93292acfcd396b8ec2b40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_carson, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:44:28 compute-0 systemd[1]: libpod-3faaa9479bab617cae46dd80185eb55862274893d0e93292acfcd396b8ec2b40.scope: Deactivated successfully.
Jan 27 08:44:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d73f77543049062b8e92910a8b19e4c23d1a0efcc708e034d641c36b9ec965c7-merged.mount: Deactivated successfully.
Jan 27 08:44:29 compute-0 ceph-mon[74357]: pgmap v610: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:29 compute-0 podman[203763]: 2026-01-27 08:44:29.021052528 +0000 UTC m=+1.252363618 container remove 3faaa9479bab617cae46dd80185eb55862274893d0e93292acfcd396b8ec2b40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:44:29 compute-0 systemd[1]: libpod-conmon-3faaa9479bab617cae46dd80185eb55862274893d0e93292acfcd396b8ec2b40.scope: Deactivated successfully.
Jan 27 08:44:29 compute-0 sudo[203305]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:29 compute-0 python3.9[204069]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769503467.9288435-1646-93860400942025/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
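Each libvirt configuration file in this run follows the same stat-then-copy pattern: an ansible.legacy.stat task sha1-checksums the destination, and the matching ansible.legacy.copy only rewrites it when the checksum differs, setting owner/group libvirt and mode 0640 (0600 for auth.conf later on). A rough sketch of that idempotent copy; the uid/gid arguments and paths are illustrative assumptions, and real ansible writes via a temp file and atomic rename:

    import hashlib, os, shutil

    # Sketch of one stat+copy round as performed above: rewrite dest
    # only on checksum mismatch, then enforce ownership and mode.
    def sha1(path):
        with open(path, "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()

    def deploy(src, dest, uid, gid, mode=0o640):
        if not os.path.exists(dest) or sha1(src) != sha1(dest):
            shutil.copyfile(src, dest)
        os.chown(dest, uid, gid)
        os.chmod(dest, mode)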
Jan 27 08:44:29 compute-0 sudo[204082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:29 compute-0 sudo[204082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:29 compute-0 sudo[204082]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:29 compute-0 sudo[204065]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:29 compute-0 sudo[204107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:44:29 compute-0 sudo[204107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:29 compute-0 sudo[204107]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:29 compute-0 sudo[204155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:29 compute-0 sudo[204155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:29 compute-0 sudo[204155]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:29 compute-0 sudo[204189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:44:29 compute-0 sudo[204189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:29.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:29.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
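The radosgw pairs above (one from 192.168.122.100, one from 192.168.122.102, recurring on a roughly two-second cadence throughout this section) are anonymous HEAD / probes answered with 200, consistent with load-balancer health checks against the Beast frontend rather than real S3 traffic. A probe of that shape, with the endpoint host and port as assumptions since the log only records the client side:

    import http.client

    # Sketch of a health probe matching the "HEAD / HTTP/1.0" entries:
    # unauthenticated HEAD, healthy iff the status is 200. Host and
    # port are assumed; the log does not show where RGW listens.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)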
Jan 27 08:44:29 compute-0 sudo[204383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyvptkqniiordyavfyywiqrrirpbmtuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503469.3018954-1646-17444612400594/AnsiballZ_stat.py'
Jan 27 08:44:29 compute-0 sudo[204383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:29 compute-0 podman[204346]: 2026-01-27 08:44:29.656812625 +0000 UTC m=+0.043510617 container create 575a9b2c7e87fb3d3ca934ea26061b58862363d6c6756c1f02208bfe97321db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:44:29 compute-0 systemd[1]: Started libpod-conmon-575a9b2c7e87fb3d3ca934ea26061b58862363d6c6756c1f02208bfe97321db3.scope.
Jan 27 08:44:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:44:29 compute-0 podman[204346]: 2026-01-27 08:44:29.641097623 +0000 UTC m=+0.027795635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:44:29 compute-0 podman[204346]: 2026-01-27 08:44:29.736820762 +0000 UTC m=+0.123518774 container init 575a9b2c7e87fb3d3ca934ea26061b58862363d6c6756c1f02208bfe97321db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:44:29 compute-0 podman[204346]: 2026-01-27 08:44:29.745464759 +0000 UTC m=+0.132162741 container start 575a9b2c7e87fb3d3ca934ea26061b58862363d6c6756c1f02208bfe97321db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:44:29 compute-0 podman[204346]: 2026-01-27 08:44:29.748380269 +0000 UTC m=+0.135078261 container attach 575a9b2c7e87fb3d3ca934ea26061b58862363d6c6756c1f02208bfe97321db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:44:29 compute-0 admiring_mendeleev[204390]: 167 167
Jan 27 08:44:29 compute-0 systemd[1]: libpod-575a9b2c7e87fb3d3ca934ea26061b58862363d6c6756c1f02208bfe97321db3.scope: Deactivated successfully.
Jan 27 08:44:29 compute-0 podman[204346]: 2026-01-27 08:44:29.750485926 +0000 UTC m=+0.137183918 container died 575a9b2c7e87fb3d3ca934ea26061b58862363d6c6756c1f02208bfe97321db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mendeleev, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:44:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f36981f5e58dc45008fd79d0ae885cb1ad94b11c07b5aad17b7885edda3806f7-merged.mount: Deactivated successfully.
Jan 27 08:44:29 compute-0 podman[204346]: 2026-01-27 08:44:29.782603778 +0000 UTC m=+0.169301770 container remove 575a9b2c7e87fb3d3ca934ea26061b58862363d6c6756c1f02208bfe97321db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:44:29 compute-0 systemd[1]: libpod-conmon-575a9b2c7e87fb3d3ca934ea26061b58862363d6c6756c1f02208bfe97321db3.scope: Deactivated successfully.
Jan 27 08:44:29 compute-0 python3.9[204387]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:29 compute-0 sudo[204383]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:30 compute-0 podman[204438]: 2026-01-27 08:44:30.003766381 +0000 UTC m=+0.056804240 container create bdfe62c32101a19bc25cf0de907a0e6b110f49fdcc38a0c55cc3903c4934ffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:44:30 compute-0 systemd[1]: Started libpod-conmon-bdfe62c32101a19bc25cf0de907a0e6b110f49fdcc38a0c55cc3903c4934ffc0.scope.
Jan 27 08:44:30 compute-0 podman[204438]: 2026-01-27 08:44:29.980232824 +0000 UTC m=+0.033270713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:44:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb312e8a0a8d6b9ee32d41067dc9127e1dfc860b14880e45bac9ed908aa5555/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb312e8a0a8d6b9ee32d41067dc9127e1dfc860b14880e45bac9ed908aa5555/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb312e8a0a8d6b9ee32d41067dc9127e1dfc860b14880e45bac9ed908aa5555/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb312e8a0a8d6b9ee32d41067dc9127e1dfc860b14880e45bac9ed908aa5555/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:30 compute-0 podman[204438]: 2026-01-27 08:44:30.100348603 +0000 UTC m=+0.153386482 container init bdfe62c32101a19bc25cf0de907a0e6b110f49fdcc38a0c55cc3903c4934ffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:44:30 compute-0 podman[204438]: 2026-01-27 08:44:30.10971254 +0000 UTC m=+0.162750349 container start bdfe62c32101a19bc25cf0de907a0e6b110f49fdcc38a0c55cc3903c4934ffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:44:30 compute-0 podman[204438]: 2026-01-27 08:44:30.11336312 +0000 UTC m=+0.166400999 container attach bdfe62c32101a19bc25cf0de907a0e6b110f49fdcc38a0c55cc3903c4934ffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:44:30 compute-0 sudo[204556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfbjjgdblmztfnqydanltjislnusdrkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503469.3018954-1646-17444612400594/AnsiballZ_copy.py'
Jan 27 08:44:30 compute-0 sudo[204556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:30 compute-0 python3.9[204558]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769503469.3018954-1646-17444612400594/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:30 compute-0 sudo[204556]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:30 compute-0 awesome_kirch[204491]: {
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:     "0": [
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:         {
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             "devices": [
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "/dev/loop3"
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             ],
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             "lv_name": "ceph_lv0",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             "lv_size": "7511998464",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             "name": "ceph_lv0",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             "tags": {
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "ceph.cluster_name": "ceph",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "ceph.crush_device_class": "",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "ceph.encrypted": "0",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "ceph.osd_id": "0",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "ceph.type": "block",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:                 "ceph.vdo": "0"
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             },
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             "type": "block",
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:             "vg_name": "ceph_vg0"
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:         }
Jan 27 08:44:30 compute-0 awesome_kirch[204491]:     ]
Jan 27 08:44:30 compute-0 awesome_kirch[204491]: }
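The awesome_kirch JSON above is the answer to the ceph-volume "lvm list --format json" call launched via cephadm (sudo[204189]): a map from OSD id to the logical volumes backing it, with the cluster fsid, osd_fsid, encryption state, and osdspec affinity carried in the LV tags. A hedged sketch of reassembling and indexing such output from journal lines, assuming the "name[pid]: " prefix format seen here:

    import json

    # Sketch: strip the journald "name[pid]: " prefixes, rebuild the
    # JSON document, and index it as osd_id -> (lv_path, osd_fsid).
    def parse_lvm_list(journal_lines):
        payload = "".join(line.split("]: ", 1)[1] for line in journal_lines)
        data = json.loads(payload)
        return {
            osd_id: [(lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
                     for lv in lvs]
            for osd_id, lvs in data.items()
        }

For the listing above this yields {"0": [("/dev/ceph_vg0/ceph_lv0", "c06a7c81-ab3c-42b8-812f-79473670be30")]}.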
Jan 27 08:44:30 compute-0 systemd[1]: libpod-bdfe62c32101a19bc25cf0de907a0e6b110f49fdcc38a0c55cc3903c4934ffc0.scope: Deactivated successfully.
Jan 27 08:44:30 compute-0 podman[204438]: 2026-01-27 08:44:30.9134729 +0000 UTC m=+0.966510699 container died bdfe62c32101a19bc25cf0de907a0e6b110f49fdcc38a0c55cc3903c4934ffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdb312e8a0a8d6b9ee32d41067dc9127e1dfc860b14880e45bac9ed908aa5555-merged.mount: Deactivated successfully.
Jan 27 08:44:30 compute-0 podman[204438]: 2026-01-27 08:44:30.985194229 +0000 UTC m=+1.038232028 container remove bdfe62c32101a19bc25cf0de907a0e6b110f49fdcc38a0c55cc3903c4934ffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:44:30 compute-0 systemd[1]: libpod-conmon-bdfe62c32101a19bc25cf0de907a0e6b110f49fdcc38a0c55cc3903c4934ffc0.scope: Deactivated successfully.
Jan 27 08:44:30 compute-0 sudo[204727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iapskdwzfnztkgopgairstphcjulpukv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503470.6648371-1646-213679768178187/AnsiballZ_stat.py'
Jan 27 08:44:31 compute-0 sudo[204727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:31 compute-0 sudo[204189]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:31 compute-0 sudo[204730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:31 compute-0 sudo[204730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:31 compute-0 sudo[204730]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:31 compute-0 ceph-mon[74357]: pgmap v611: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:31 compute-0 sudo[204756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:44:31 compute-0 sudo[204756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:31 compute-0 sudo[204756]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:31 compute-0 sudo[204781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:31 compute-0 sudo[204781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:31 compute-0 sudo[204781]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:31 compute-0 python3.9[204729]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:31 compute-0 sudo[204727]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:31 compute-0 sudo[204806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:44:31 compute-0 sudo[204806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:44:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:31.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:44:31 compute-0 sudo[204988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfylqkfqvkpyrnxzbcgszqqttyezbqjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503470.6648371-1646-213679768178187/AnsiballZ_copy.py'
Jan 27 08:44:31 compute-0 sudo[204988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:44:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:31.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:44:31 compute-0 podman[204995]: 2026-01-27 08:44:31.686131715 +0000 UTC m=+0.055589197 container create 00eea81b237f317e29e4d5e788c74c5441c0d935b7be144211228da102c7461a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mcnulty, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:44:31 compute-0 systemd[1]: Started libpod-conmon-00eea81b237f317e29e4d5e788c74c5441c0d935b7be144211228da102c7461a.scope.
Jan 27 08:44:31 compute-0 podman[204995]: 2026-01-27 08:44:31.653857539 +0000 UTC m=+0.023315001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:44:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:44:31 compute-0 python3.9[204994]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769503470.6648371-1646-213679768178187/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:31 compute-0 podman[204995]: 2026-01-27 08:44:31.783441397 +0000 UTC m=+0.152898889 container init 00eea81b237f317e29e4d5e788c74c5441c0d935b7be144211228da102c7461a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mcnulty, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 27 08:44:31 compute-0 podman[204995]: 2026-01-27 08:44:31.792815274 +0000 UTC m=+0.162272736 container start 00eea81b237f317e29e4d5e788c74c5441c0d935b7be144211228da102c7461a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:44:31 compute-0 podman[204995]: 2026-01-27 08:44:31.797196415 +0000 UTC m=+0.166653877 container attach 00eea81b237f317e29e4d5e788c74c5441c0d935b7be144211228da102c7461a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 27 08:44:31 compute-0 vibrant_mcnulty[205011]: 167 167
Jan 27 08:44:31 compute-0 systemd[1]: libpod-00eea81b237f317e29e4d5e788c74c5441c0d935b7be144211228da102c7461a.scope: Deactivated successfully.
Jan 27 08:44:31 compute-0 podman[204995]: 2026-01-27 08:44:31.805676768 +0000 UTC m=+0.175134220 container died 00eea81b237f317e29e4d5e788c74c5441c0d935b7be144211228da102c7461a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mcnulty, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 08:44:31 compute-0 sudo[204988]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1d8b629b33162c68fdfc1b13358e2b8ba62fd0b378822757841992d2006b8ad-merged.mount: Deactivated successfully.
Jan 27 08:44:31 compute-0 podman[204995]: 2026-01-27 08:44:31.846409086 +0000 UTC m=+0.215866548 container remove 00eea81b237f317e29e4d5e788c74c5441c0d935b7be144211228da102c7461a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:44:31 compute-0 systemd[1]: libpod-conmon-00eea81b237f317e29e4d5e788c74c5441c0d935b7be144211228da102c7461a.scope: Deactivated successfully.
Jan 27 08:44:32 compute-0 podman[205082]: 2026-01-27 08:44:32.018253925 +0000 UTC m=+0.051512455 container create 6314cd4a0c51447edf0421cc4680f46ad91fa3edcc724da97bb052bc18959e9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:44:32 compute-0 systemd[1]: Started libpod-conmon-6314cd4a0c51447edf0421cc4680f46ad91fa3edcc724da97bb052bc18959e9b.scope.
Jan 27 08:44:32 compute-0 podman[205082]: 2026-01-27 08:44:31.991818478 +0000 UTC m=+0.025077018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:44:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd8b5f7522e575085083a258a91dcd8ab9e25254a399baec681ba962c6b75cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd8b5f7522e575085083a258a91dcd8ab9e25254a399baec681ba962c6b75cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd8b5f7522e575085083a258a91dcd8ab9e25254a399baec681ba962c6b75cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd8b5f7522e575085083a258a91dcd8ab9e25254a399baec681ba962c6b75cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:32 compute-0 podman[205082]: 2026-01-27 08:44:32.115746462 +0000 UTC m=+0.149004992 container init 6314cd4a0c51447edf0421cc4680f46ad91fa3edcc724da97bb052bc18959e9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_gagarin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:44:32 compute-0 podman[205082]: 2026-01-27 08:44:32.122537538 +0000 UTC m=+0.155796068 container start 6314cd4a0c51447edf0421cc4680f46ad91fa3edcc724da97bb052bc18959e9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:44:32 compute-0 podman[205082]: 2026-01-27 08:44:32.126337563 +0000 UTC m=+0.159596083 container attach 6314cd4a0c51447edf0421cc4680f46ad91fa3edcc724da97bb052bc18959e9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 27 08:44:32 compute-0 sudo[205205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eegdgumyxrgqsnqxcmirtgjadoqoiqtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503471.9574816-1646-222665197391348/AnsiballZ_stat.py'
Jan 27 08:44:32 compute-0 sudo[205205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:44:32 compute-0 python3.9[205207]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:32 compute-0 sudo[205205]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:32 compute-0 sudo[205341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gozigwrwymtdgpdrsjdvbajddgwturpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503471.9574816-1646-222665197391348/AnsiballZ_copy.py'
Jan 27 08:44:32 compute-0 sudo[205341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:32 compute-0 eloquent_gagarin[205132]: {
Jan 27 08:44:32 compute-0 eloquent_gagarin[205132]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:44:32 compute-0 eloquent_gagarin[205132]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:44:32 compute-0 eloquent_gagarin[205132]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:44:32 compute-0 eloquent_gagarin[205132]:         "osd_id": 0,
Jan 27 08:44:32 compute-0 eloquent_gagarin[205132]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:44:32 compute-0 eloquent_gagarin[205132]:         "type": "bluestore"
Jan 27 08:44:32 compute-0 eloquent_gagarin[205132]:     }
Jan 27 08:44:32 compute-0 eloquent_gagarin[205132]: }
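eloquent_gagarin's JSON is the companion ceph-volume "raw list --format json" call (sudo[204806]): the same OSD seen from the device side, keyed by osd_uuid, with the device-mapper path and bluestore type. The two listings should agree on which OSDs exist; a small consistency check over the decoded documents, with the shapes as shown in this log:

    # Sketch: cross-check the two ceph-volume listings. `lvm` is the
    # decoded `lvm list` JSON (osd_id -> list of LV dicts) and `raw`
    # the decoded `raw list` JSON (osd_uuid -> device dict).
    def same_osds(lvm, raw):
        from_lvm = {lv["tags"]["ceph.osd_fsid"]
                    for lvs in lvm.values() for lv in lvs}
        return from_lvm == set(raw)

Here both sides name the single OSD c06a7c81-ab3c-42b8-812f-79473670be30.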
Jan 27 08:44:32 compute-0 systemd[1]: libpod-6314cd4a0c51447edf0421cc4680f46ad91fa3edcc724da97bb052bc18959e9b.scope: Deactivated successfully.
Jan 27 08:44:32 compute-0 podman[205082]: 2026-01-27 08:44:32.978504571 +0000 UTC m=+1.011763131 container died 6314cd4a0c51447edf0421cc4680f46ad91fa3edcc724da97bb052bc18959e9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfd8b5f7522e575085083a258a91dcd8ab9e25254a399baec681ba962c6b75cd-merged.mount: Deactivated successfully.
Jan 27 08:44:33 compute-0 podman[205082]: 2026-01-27 08:44:33.046462917 +0000 UTC m=+1.079721447 container remove 6314cd4a0c51447edf0421cc4680f46ad91fa3edcc724da97bb052bc18959e9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:44:33 compute-0 systemd[1]: libpod-conmon-6314cd4a0c51447edf0421cc4680f46ad91fa3edcc724da97bb052bc18959e9b.scope: Deactivated successfully.
Jan 27 08:44:33 compute-0 sudo[204806]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:44:33 compute-0 python3.9[205346]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769503471.9574816-1646-222665197391348/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:33 compute-0 sudo[205341]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:33.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:33 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:44:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
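With the scans done, cephadm persists the refreshed host and device facts into the monitor's config-key store (the config-key set commands for mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0 above), so the orchestrator can answer inventory queries without re-running ceph-volume each time. Reading one of those keys back is straightforward; a sketch, assuming an admin keyring on the host and that the stored value is JSON, as cephadm's inventory normally is:

    import json, subprocess

    # Sketch: fetch the device inventory cephadm just cached. The key
    # name is the one in the log; `ceph config-key get` is the normal
    # CLI for the mon key/value store.
    out = subprocess.run(
        ["ceph", "config-key", "get",
         "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))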
Jan 27 08:44:33 compute-0 ceph-mon[74357]: pgmap v612: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:33 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:44:33 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 644b1e62-44cf-4cb8-8109-13625321252e does not exist
Jan 27 08:44:33 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5662c41f-c6c9-4ea9-ae6e-159be304ed1e does not exist
Jan 27 08:44:33 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 145a61a9-8d60-4a06-a3a0-044f932d44b6 does not exist
Jan 27 08:44:33 compute-0 sudo[205387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:33 compute-0 sudo[205387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:33 compute-0 sudo[205387]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:33 compute-0 sudo[205412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:44:33 compute-0 sudo[205412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:33 compute-0 sudo[205412]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:34 compute-0 sudo[205562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okzyhyqoduywktrearghzurzpvnqntke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503473.7786448-1985-139265495464348/AnsiballZ_command.py'
Jan 27 08:44:34 compute-0 sudo[205562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:34 compute-0 python3.9[205564]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 27 08:44:34 compute-0 sudo[205562]: pam_unix(sudo:session): session closed for user root
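The sudo[205562] task provisions the SASL credential libvirt will use for authenticated live migration: saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration, with the password supplied on stdin (the module's stdin=12345678, plainly a CI placeholder). The equivalent call, written out as a subprocess sketch:

    import subprocess

    # Sketch of the ansible.legacy.command task above: create/update
    # the migration@openstack user in libvirt's SASL database.
    # -p reads the password from stdin, -a sets the application name,
    # -u sets the user domain/realm.
    subprocess.run(
        ["saslpasswd2", "-f", "/etc/libvirt/passwd.db",
         "-p", "-a", "libvirt", "-u", "openstack", "migration"],
        input="12345678\n", text=True, check=True,
    )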
Jan 27 08:44:34 compute-0 ceph-mon[74357]: pgmap v613: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:34 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:44:34 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:44:34 compute-0 sudo[205715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjhryxemtojlpjrkianpjdyknsjzyxrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503474.576414-2012-222182548760294/AnsiballZ_file.py'
Jan 27 08:44:34 compute-0 sudo[205715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:35 compute-0 python3.9[205717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:35 compute-0 sudo[205715]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:35.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:35 compute-0 sudo[205868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rumuacnrerxmhaifpdlggjivqhrczctg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503475.2964523-2012-114602004635144/AnsiballZ_file.py'
Jan 27 08:44:35 compute-0 sudo[205868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:35.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:35 compute-0 python3.9[205870]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:35 compute-0 sudo[205868]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:36 compute-0 sudo[206020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdvpqhbycmuoabiqllxdstngzrdoeeld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503475.8886473-2012-278178127943478/AnsiballZ_file.py'
Jan 27 08:44:36 compute-0 sudo[206020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:36 compute-0 python3.9[206022]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:36 compute-0 sudo[206020]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:36 compute-0 ceph-mon[74357]: pgmap v614: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:36 compute-0 sudo[206172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhnwmwmtaqsinbixavcvqlncubrnbxgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503476.518529-2012-46709135270102/AnsiballZ_file.py'
Jan 27 08:44:36 compute-0 sudo[206172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:37 compute-0 python3.9[206174]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:37 compute-0 sudo[206172]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:37.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:44:37 compute-0 sudo[206325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjejhuszvryamuakyddjnkbzfzjrcibu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503477.1929684-2012-189602342288375/AnsiballZ_file.py'
Jan 27 08:44:37 compute-0 sudo[206325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:37.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:37 compute-0 python3.9[206327]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:37 compute-0 sudo[206325]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:38 compute-0 sudo[206477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jepdrvzfgghrdbcyotqnoshlflzbhddx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503477.839012-2012-107750770863667/AnsiballZ_file.py'
Jan 27 08:44:38 compute-0 sudo[206477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:38 compute-0 python3.9[206479]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:38 compute-0 sudo[206477]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:38 compute-0 ceph-mon[74357]: pgmap v615: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:38 compute-0 sudo[206629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whuoqgijmovaqlwqcbguxglkpfgpthlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503478.459652-2012-147913484287351/AnsiballZ_file.py'
Jan 27 08:44:38 compute-0 sudo[206629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:38 compute-0 python3.9[206631]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:38 compute-0 sudo[206629]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:39 compute-0 sudo[206680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:39 compute-0 sudo[206680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:39 compute-0 sudo[206680]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:39 compute-0 sudo[206718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:39 compute-0 sudo[206718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:39 compute-0 sudo[206718]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:39 compute-0 sudo[206832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anjdxjlkrtanuvjpeqwjuzspseflinjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503479.075073-2012-65086425763277/AnsiballZ_file.py'
Jan 27 08:44:39 compute-0 sudo[206832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:39.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:39 compute-0 python3.9[206834]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:39 compute-0 sudo[206832]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:39.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:39 compute-0 sudo[206984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aepldxybtdbqapzwdsmgjxmjpqthqbnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503479.668744-2012-35220798754938/AnsiballZ_file.py'
Jan 27 08:44:39 compute-0 sudo[206984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:40 compute-0 python3.9[206986]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:40 compute-0 sudo[206984]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:40 compute-0 sudo[207136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tghhifgbjpymtfkrolrxkahebljbwozt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503480.3394718-2012-45223654589128/AnsiballZ_file.py'
Jan 27 08:44:40 compute-0 sudo[207136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:40 compute-0 ceph-mon[74357]: pgmap v616: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:40 compute-0 python3.9[207138]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:40 compute-0 sudo[207136]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:41 compute-0 sudo[207289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkqxpjyoogjklrhqmcodkemkxfapnwui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503480.9172666-2012-174242242832712/AnsiballZ_file.py'
Jan 27 08:44:41 compute-0 sudo[207289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:41 compute-0 python3.9[207291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:41.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:41 compute-0 sudo[207289]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:41.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:41 compute-0 sudo[207441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtzofftsrlpserzgllihvcjrenclwkrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503481.5449016-2012-179453870740843/AnsiballZ_file.py'
Jan 27 08:44:41 compute-0 sudo[207441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:41 compute-0 python3.9[207443]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:42 compute-0 sudo[207441]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:42 compute-0 sudo[207593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcnyqpzctpnhmntcrggiuttackdixgsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503482.154245-2012-277873694565247/AnsiballZ_file.py'
Jan 27 08:44:42 compute-0 sudo[207593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:44:42 compute-0 python3.9[207595]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:42 compute-0 ceph-mon[74357]: pgmap v617: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:42 compute-0 sudo[207593]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:43 compute-0 sudo[207746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jedvcvlmhstqiidxolsyctpowsvyedth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503482.8494544-2012-153241357192598/AnsiballZ_file.py'
Jan 27 08:44:43 compute-0 sudo[207746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:43 compute-0 python3.9[207748]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:43 compute-0 sudo[207746]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:43.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:43.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:43 compute-0 sudo[207898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sslekexscsgdhgmbrwngtsjdoqepbgqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503483.7278657-2309-215311466448684/AnsiballZ_stat.py'
Jan 27 08:44:43 compute-0 sudo[207898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:44 compute-0 python3.9[207900]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:44 compute-0 sudo[207898]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:44 compute-0 sudo[208021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvjnwguuzncrvptaziptchtnvckbtuqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503483.7278657-2309-215311466448684/AnsiballZ_copy.py'
Jan 27 08:44:44 compute-0 sudo[208021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:44 compute-0 python3.9[208023]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503483.7278657-2309-215311466448684/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:44 compute-0 sudo[208021]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:44 compute-0 ceph-mon[74357]: pgmap v618: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:44:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:44:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:44:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:44:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:44:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:44:45 compute-0 sudo[208174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uesympoamuhonbjcdokjipelwbaiqncp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503484.876171-2309-273543761829144/AnsiballZ_stat.py'
Jan 27 08:44:45 compute-0 sudo[208174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:45 compute-0 python3.9[208176]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:45 compute-0 sudo[208174]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:45.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:44:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:45.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:44:45 compute-0 sudo[208297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etpcmkvecvmnsmamdjlhakuojiktjsvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503484.876171-2309-273543761829144/AnsiballZ_copy.py'
Jan 27 08:44:45 compute-0 sudo[208297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:45 compute-0 python3.9[208299]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503484.876171-2309-273543761829144/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:45 compute-0 sudo[208297]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:46 compute-0 sudo[208449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usfknabdggscexxesixkgqrgetllmbrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503486.0078905-2309-74391484925103/AnsiballZ_stat.py'
Jan 27 08:44:46 compute-0 sudo[208449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:46 compute-0 python3.9[208451]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:46 compute-0 sudo[208449]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:46 compute-0 ceph-mon[74357]: pgmap v619: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:46 compute-0 sudo[208572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqaxduaxidacudkwvexsxtchupgiaphw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503486.0078905-2309-74391484925103/AnsiballZ_copy.py'
Jan 27 08:44:46 compute-0 sudo[208572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:47 compute-0 python3.9[208574]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503486.0078905-2309-74391484925103/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:47 compute-0 sudo[208572]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:47.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:44:47 compute-0 sudo[208725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elzftzmsvcvzjswnkkheqjylhojirnbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503487.257826-2309-206074395165679/AnsiballZ_stat.py'
Jan 27 08:44:47 compute-0 sudo[208725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:47.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:47 compute-0 python3.9[208727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:47 compute-0 sudo[208725]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:48 compute-0 sudo[208848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odpdxsejbmcstflwutmariqywccoacqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503487.257826-2309-206074395165679/AnsiballZ_copy.py'
Jan 27 08:44:48 compute-0 sudo[208848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:48 compute-0 python3.9[208850]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503487.257826-2309-206074395165679/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:48 compute-0 sudo[208848]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:48 compute-0 sudo[209000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuprhprgskkjkpqxvswmgfpaeramerrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503488.4839518-2309-229530632848133/AnsiballZ_stat.py'
Jan 27 08:44:48 compute-0 sudo[209000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:48 compute-0 ceph-mon[74357]: pgmap v620: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:49 compute-0 python3.9[209002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:49 compute-0 sudo[209000]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:49 compute-0 podman[209027]: 2026-01-27 08:44:49.265624828 +0000 UTC m=+0.080843501 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 08:44:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:49.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:49 compute-0 sudo[209150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dygfoybdystmcfavkoiyoogqvqdjslbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503488.4839518-2309-229530632848133/AnsiballZ_copy.py'
Jan 27 08:44:49 compute-0 sudo[209150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:49.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:49 compute-0 python3.9[209152]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503488.4839518-2309-229530632848133/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:49 compute-0 sudo[209150]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:50 compute-0 sudo[209302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlupmihhzkcazwsgutvuzbmriibqhbns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503489.969711-2309-50800015387628/AnsiballZ_stat.py'
Jan 27 08:44:50 compute-0 sudo[209302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:50 compute-0 python3.9[209304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:50 compute-0 sudo[209302]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:50 compute-0 sudo[209425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udxqovvahhnxufkshfabtbditzzekpeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503489.969711-2309-50800015387628/AnsiballZ_copy.py'
Jan 27 08:44:50 compute-0 sudo[209425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:51 compute-0 ceph-mon[74357]: pgmap v621: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:51 compute-0 python3.9[209427]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503489.969711-2309-50800015387628/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:51 compute-0 sudo[209425]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 27 08:44:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:51.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 27 08:44:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:51.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:51 compute-0 sudo[209578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzpfbxnpfbzdxmzxaeuuruvtarngdyty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503491.310841-2309-151277128271117/AnsiballZ_stat.py'
Jan 27 08:44:51 compute-0 sudo[209578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:51 compute-0 python3.9[209580]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:51 compute-0 sudo[209578]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:52 compute-0 sudo[209701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbzuiyftdcdaybwgnermywuasyabaquc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503491.310841-2309-151277128271117/AnsiballZ_copy.py'
Jan 27 08:44:52 compute-0 sudo[209701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:52 compute-0 ceph-mon[74357]: pgmap v622: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:44:52 compute-0 python3.9[209703]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503491.310841-2309-151277128271117/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:52 compute-0 sudo[209701]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:53 compute-0 sudo[209854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeytxkqsfmvobiomcftgykfxarnnabfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503492.7943347-2309-173696816360499/AnsiballZ_stat.py'
Jan 27 08:44:53 compute-0 sudo[209854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:53 compute-0 python3.9[209856]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:53 compute-0 sudo[209854]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:53.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:53.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:53 compute-0 sudo[209977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rczvbfoecolqzoaqxrvzuxqmxxtnhelh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503492.7943347-2309-173696816360499/AnsiballZ_copy.py'
Jan 27 08:44:53 compute-0 sudo[209977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:53 compute-0 python3.9[209979]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503492.7943347-2309-173696816360499/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:53 compute-0 sudo[209977]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:44:54.227 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:44:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:44:54.227 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:44:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:44:54.227 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:44:54 compute-0 sudo[210129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjvnxotradjkgqexqpslqvkwxjnblfcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503494.0737772-2309-234886795022980/AnsiballZ_stat.py'
Jan 27 08:44:54 compute-0 sudo[210129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:54 compute-0 python3.9[210131]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:54 compute-0 sudo[210129]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:54 compute-0 ceph-mon[74357]: pgmap v623: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:54 compute-0 sudo[210252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkyycqihiabhcdpysrobkeuadlzkwrgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503494.0737772-2309-234886795022980/AnsiballZ_copy.py'
Jan 27 08:44:54 compute-0 sudo[210252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:55 compute-0 python3.9[210254]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503494.0737772-2309-234886795022980/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:55 compute-0 sudo[210252]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:55.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:55.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:55 compute-0 sudo[210405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocrvojwfnkyrykhwscklblvfskmsonsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503495.5291448-2309-195432041648984/AnsiballZ_stat.py'
Jan 27 08:44:55 compute-0 sudo[210405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:56 compute-0 python3.9[210407]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:56 compute-0 sudo[210405]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:56 compute-0 sudo[210528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duebsetfpezggpvfnddlnwvczilfwdpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503495.5291448-2309-195432041648984/AnsiballZ_copy.py'
Jan 27 08:44:56 compute-0 sudo[210528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:56 compute-0 python3.9[210530]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503495.5291448-2309-195432041648984/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:56 compute-0 sudo[210528]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:56 compute-0 ceph-mon[74357]: pgmap v624: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:56 compute-0 sudo[210692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htfqzwkexcvskwvxfbkmtupijbooajeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503496.647926-2309-144787017938486/AnsiballZ_stat.py'
Jan 27 08:44:56 compute-0 sudo[210692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:56 compute-0 podman[210654]: 2026-01-27 08:44:56.940041503 +0000 UTC m=+0.051117908 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:44:57 compute-0 python3.9[210698]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:57 compute-0 sudo[210692]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:57.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:57 compute-0 sudo[210822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udefawtoqsnvmzdsmpuadyzbwqodjktg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503496.647926-2309-144787017938486/AnsiballZ_copy.py'
Jan 27 08:44:57 compute-0 sudo[210822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:44:57 compute-0 python3.9[210824]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503496.647926-2309-144787017938486/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:57.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:57 compute-0 sudo[210822]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:58 compute-0 sudo[210974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juvhvzblaqpymnjvixbrdzqjgfhkfeeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503497.7726548-2309-46929459305183/AnsiballZ_stat.py'
Jan 27 08:44:58 compute-0 sudo[210974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:58 compute-0 python3.9[210976]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:58 compute-0 sudo[210974]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:58 compute-0 sudo[211097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dehnskrnoosqegcmntmihiqeehgbjtbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503497.7726548-2309-46929459305183/AnsiballZ_copy.py'
Jan 27 08:44:58 compute-0 sudo[211097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:58 compute-0 python3.9[211099]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503497.7726548-2309-46929459305183/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:44:58 compute-0 ceph-mon[74357]: pgmap v625: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:58 compute-0 sudo[211097]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:59 compute-0 sudo[211220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:59 compute-0 sudo[211220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:59 compute-0 sudo[211220]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:44:59 compute-0 sudo[211296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obvuvazwrhmwqvmqukthxjdyixuknduz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503498.9736557-2309-209916400881609/AnsiballZ_stat.py'
Jan 27 08:44:59 compute-0 sudo[211296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:44:59 compute-0 sudo[211259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:44:59 compute-0 sudo[211259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:44:59 compute-0 sudo[211259]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:44:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:59 compute-0 python3.9[211301]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:44:59 compute-0 sudo[211296]: pam_unix(sudo:session): session closed for user root
Jan 27 08:44:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:44:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:44:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:44:59.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:44:59 compute-0 sudo[211423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnqhsohenplucsxdyftovripyfffqyhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503498.9736557-2309-209916400881609/AnsiballZ_copy.py'
Jan 27 08:44:59 compute-0 sudo[211423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:00 compute-0 python3.9[211425]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503498.9736557-2309-209916400881609/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:00 compute-0 sudo[211423]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:00 compute-0 sudo[211575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqagdwaibrcqscnolxybdfzhjmqyzkqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503500.2130964-2309-41091092434407/AnsiballZ_stat.py'
Jan 27 08:45:00 compute-0 sudo[211575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:00 compute-0 python3.9[211577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:00 compute-0 sudo[211575]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:00 compute-0 ceph-mon[74357]: pgmap v626: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:01 compute-0 sudo[211698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxwazmgykoliquhgskywfqpmuyepxkgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503500.2130964-2309-41091092434407/AnsiballZ_copy.py'
Jan 27 08:45:01 compute-0 sudo[211698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:01 compute-0 python3.9[211701]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503500.2130964-2309-41091092434407/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:01 compute-0 sudo[211698]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:01.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:01.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:02 compute-0 python3.9[211851]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:45:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:45:02 compute-0 ceph-mon[74357]: pgmap v627: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:02 compute-0 sudo[212004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqcdrskarqqufhdfrhhtlvtodrtoqtmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503502.462754-2927-71993887243122/AnsiballZ_seboolean.py'
Jan 27 08:45:02 compute-0 sudo[212004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:03 compute-0 python3.9[212006]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 27 08:45:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 27 08:45:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:03.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 27 08:45:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 27 08:45:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:03.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 27 08:45:04 compute-0 sudo[212004]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:04 compute-0 ceph-mon[74357]: pgmap v628: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:05.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:05 compute-0 sudo[212162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okhkgmhvpimuadtfefdljujyzfhfmnpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503505.29019-2951-185656340473863/AnsiballZ_copy.py'
Jan 27 08:45:05 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 27 08:45:05 compute-0 sudo[212162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:05.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:05 compute-0 python3.9[212164]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:05 compute-0 sudo[212162]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:06 compute-0 sudo[212314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teuouzuqcpaqimqnvaiyjboihbcpfnhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503505.9131644-2951-224780848856072/AnsiballZ_copy.py'
Jan 27 08:45:06 compute-0 sudo[212314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:06 compute-0 python3.9[212316]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:06 compute-0 sudo[212314]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:06 compute-0 sudo[212466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyiymupgicqnhrxlilpmiqjpsxvqboua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503506.6113708-2951-97029966652205/AnsiballZ_copy.py'
Jan 27 08:45:06 compute-0 sudo[212466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:07 compute-0 ceph-mon[74357]: pgmap v629: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:07 compute-0 python3.9[212468]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:07 compute-0 sudo[212466]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:07.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:07 compute-0 sudo[212619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymshuvfvtedaqdkfmghszlxrimidtaii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503507.2790902-2951-196750904039492/AnsiballZ_copy.py'
Jan 27 08:45:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:45:07 compute-0 sudo[212619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:07.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:07 compute-0 python3.9[212621]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:07 compute-0 sudo[212619]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:08 compute-0 sudo[212771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovhfkokeksobaeqnptzfbwbofajmellp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503507.9579327-2951-74781606673599/AnsiballZ_copy.py'
Jan 27 08:45:08 compute-0 sudo[212771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:08 compute-0 python3.9[212773]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:08 compute-0 sudo[212771]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:09 compute-0 sudo[212923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryaawinqgmepawrbbigwkugwlcgzagsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503508.7502575-3059-60797460160161/AnsiballZ_copy.py'
Jan 27 08:45:09 compute-0 sudo[212923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:09 compute-0 ceph-mon[74357]: pgmap v630: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:09 compute-0 python3.9[212926]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:09 compute-0 sudo[212923]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:09.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:09.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:09 compute-0 sudo[213076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmyjpztfrajlcayyxhpvhgfqxmqjfqdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503509.4646003-3059-144515967132093/AnsiballZ_copy.py'
Jan 27 08:45:09 compute-0 sudo[213076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:09 compute-0 python3.9[213078]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:09 compute-0 sudo[213076]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:10 compute-0 sudo[213228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgawzzesdidjxgjcgmxuvaphbqbxfysy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503510.0964174-3059-37188048842447/AnsiballZ_copy.py'
Jan 27 08:45:10 compute-0 sudo[213228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:10 compute-0 python3.9[213230]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:10 compute-0 sudo[213228]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:10 compute-0 sudo[213380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxpnxtoixnngatouzjlycxxxvditeeug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503510.6854513-3059-61777519232118/AnsiballZ_copy.py'
Jan 27 08:45:10 compute-0 sudo[213380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:11 compute-0 python3.9[213382]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:11 compute-0 sudo[213380]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:11 compute-0 ceph-mon[74357]: pgmap v631: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:11.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:11 compute-0 sudo[213533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpxlghhjepjlogrhpcymxowljafkyxgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503511.2681-3059-243867318899304/AnsiballZ_copy.py'
Jan 27 08:45:11 compute-0 sudo[213533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:11.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:11 compute-0 python3.9[213535]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:11 compute-0 sudo[213533]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:12 compute-0 sudo[213685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwqsanmgkvouchglpelrqfyxehxmwvza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503512.0592003-3167-35146395108638/AnsiballZ_systemd.py'
Jan 27 08:45:12 compute-0 sudo[213685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:45:12 compute-0 python3.9[213687]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:45:12 compute-0 systemd[1]: Reloading.
Jan 27 08:45:12 compute-0 systemd-rc-local-generator[213715]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:45:12 compute-0 systemd-sysv-generator[213718]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:45:12 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 27 08:45:13 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 27 08:45:13 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 27 08:45:13 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 27 08:45:13 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 27 08:45:13 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 27 08:45:13 compute-0 sudo[213685]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:13 compute-0 ceph-mon[74357]: pgmap v632: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:13.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:13 compute-0 sudo[213879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcdlzbvonmgnzywfltdhwdgpqfjcavts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503513.3447168-3167-109930261474444/AnsiballZ_systemd.py'
Jan 27 08:45:13 compute-0 sudo[213879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:13.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:13 compute-0 python3.9[213881]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:45:13 compute-0 systemd[1]: Reloading.
Jan 27 08:45:13 compute-0 systemd-rc-local-generator[213908]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:45:14 compute-0 systemd-sysv-generator[213911]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:45:14 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 27 08:45:14 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 27 08:45:14 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 27 08:45:14 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 27 08:45:14 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 27 08:45:14 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 27 08:45:14 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 27 08:45:14 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 27 08:45:14 compute-0 sudo[213879]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:14 compute-0 ceph-mon[74357]: pgmap v633: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:14 compute-0 sudo[214095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqscbpcfrtfgusimspmyqyfjqfqmlewy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503514.448579-3167-205207362873577/AnsiballZ_systemd.py'
Jan 27 08:45:14 compute-0 sudo[214095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:45:14
Jan 27 08:45:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:45:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:45:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms', 'images', '.rgw.root', 'backups']
Jan 27 08:45:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:45:15 compute-0 python3.9[214097]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:45:15 compute-0 systemd[1]: Reloading.
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:45:15 compute-0 systemd-rc-local-generator[214125]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:45:15 compute-0 systemd-sysv-generator[214128]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:45:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:15 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 27 08:45:15 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 27 08:45:15 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 27 08:45:15 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 27 08:45:15 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 27 08:45:15 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 08:45:15 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 08:45:15 compute-0 sudo[214095]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 27 08:45:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:15.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 27 08:45:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:15.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:15 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 27 08:45:15 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 27 08:45:15 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 27 08:45:15 compute-0 sudo[214315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfjaapmqoxkiugwmacrbuikidcjykodw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503515.575641-3167-7682600210029/AnsiballZ_systemd.py'
Jan 27 08:45:15 compute-0 sudo[214315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:16 compute-0 python3.9[214317]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:45:16 compute-0 systemd[1]: Reloading.
Jan 27 08:45:16 compute-0 systemd-sysv-generator[214350]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:45:16 compute-0 systemd-rc-local-generator[214347]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:45:16 compute-0 ceph-mon[74357]: pgmap v634: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:16 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 27 08:45:16 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 27 08:45:16 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 27 08:45:16 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 27 08:45:16 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 27 08:45:16 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 27 08:45:16 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 27 08:45:16 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 27 08:45:16 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 27 08:45:16 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 27 08:45:16 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 27 08:45:16 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 27 08:45:16 compute-0 sudo[214315]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:16 compute-0 setroubleshoot[214135]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l b1c567ed-f9c8-4dc2-94cb-128a4a0ee165
Jan 27 08:45:16 compute-0 setroubleshoot[214135]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 27 08:45:16 compute-0 sudo[214533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iycscflflqboqfodcpxoazuneyzqxord ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503516.686313-3167-24172520016041/AnsiballZ_systemd.py'
Jan 27 08:45:16 compute-0 sudo[214533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:17 compute-0 python3.9[214535]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:45:17 compute-0 systemd[1]: Reloading.
Jan 27 08:45:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:17 compute-0 systemd-rc-local-generator[214561]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:45:17 compute-0 systemd-sysv-generator[214564]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:45:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:17.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:45:17 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 27 08:45:17 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 27 08:45:17 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 27 08:45:17 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 27 08:45:17 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 27 08:45:17 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 27 08:45:17 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 27 08:45:17 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 27 08:45:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:17.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:17 compute-0 sudo[214533]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:18 compute-0 ceph-mon[74357]: pgmap v635: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:18 compute-0 sudo[214746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpjiuxcuxcstztutskhbydelkrptdaeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503518.4667473-3278-62244921646148/AnsiballZ_file.py'
Jan 27 08:45:18 compute-0 sudo[214746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:18 compute-0 python3.9[214748]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:18 compute-0 sudo[214746]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:19 compute-0 sudo[214849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:19 compute-0 sudo[214849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:19 compute-0 sudo[214849]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:19.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:19 compute-0 sudo[214955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owbrvmzbmuhedtbgaaktklinjkkwmarz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503519.1909738-3302-278794874814316/AnsiballZ_find.py'
Jan 27 08:45:19 compute-0 sudo[214905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:19 compute-0 sudo[214955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:19 compute-0 sudo[214905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:19 compute-0 sudo[214905]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:19 compute-0 podman[214897]: 2026-01-27 08:45:19.528527689 +0000 UTC m=+0.123432558 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 08:45:19 compute-0 python3.9[214964]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 08:45:19 compute-0 sudo[214955]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:19.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:20 compute-0 sudo[215124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pryqufnmaujzgsxyuzphbfdwfbjcsjyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503519.891324-3326-95266609126522/AnsiballZ_command.py'
Jan 27 08:45:20 compute-0 sudo[215124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:20 compute-0 python3.9[215126]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:45:20 compute-0 sudo[215124]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:20 compute-0 ceph-mon[74357]: pgmap v636: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:21 compute-0 python3.9[215280]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 08:45:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 27 08:45:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:21.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 27 08:45:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:21.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:22 compute-0 python3.9[215431]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:45:22 compute-0 ceph-mon[74357]: pgmap v637: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:22 compute-0 python3.9[215552]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503521.6339517-3383-212721955312592/.source.xml follow=False _original_basename=secret.xml.j2 checksum=c3091653377445f309593e035dc162c22574e9d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:23 compute-0 sudo[215703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkvwdaxpvatadtohkjiwzyryuwccrpgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503523.0194485-3428-227418000252966/AnsiballZ_command.py'
Jan 27 08:45:23 compute-0 sudo[215703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:23.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:23 compute-0 python3.9[215705]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 281e9bde-2795-59f4-98ac-90cf5b49a2de
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:45:23 compute-0 polkitd[43485]: Registered Authentication Agent for unix-process:215707:350822 (system bus name :1.2919 [pkttyagent --process 215707 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 27 08:45:23 compute-0 polkitd[43485]: Unregistered Authentication Agent for unix-process:215707:350822 (system bus name :1.2919, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 27 08:45:23 compute-0 polkitd[43485]: Registered Authentication Agent for unix-process:215706:350821 (system bus name :1.2920 [pkttyagent --process 215706 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 27 08:45:23 compute-0 polkitd[43485]: Unregistered Authentication Agent for unix-process:215706:350821 (system bus name :1.2920, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 27 08:45:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:23.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:23 compute-0 sudo[215703]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:45:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:45:24 compute-0 python3.9[215867]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:24 compute-0 ceph-mon[74357]: pgmap v638: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:24 compute-0 sudo[216017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osovyxveaysuflfnzdcyoonwrxcvqeat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503524.6693547-3476-2954860120299/AnsiballZ_command.py'
Jan 27 08:45:24 compute-0 sudo[216017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:25 compute-0 sudo[216017]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 27 08:45:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:25.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 27 08:45:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:25.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:25 compute-0 sudo[216171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-regmlfgdxrpkjigmjlfcxepgujplxrrk ; FSID=281e9bde-2795-59f4-98ac-90cf5b49a2de KEY=AQDBdnhpAAAAABAAc3H+hLFskAdXtvnwUr6AEQ== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503525.4986143-3500-29384971454550/AnsiballZ_command.py'
Jan 27 08:45:25 compute-0 sudo[216171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:25 compute-0 polkitd[43485]: Registered Authentication Agent for unix-process:216174:351061 (system bus name :1.2923 [pkttyagent --process 216174 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 27 08:45:25 compute-0 polkitd[43485]: Unregistered Authentication Agent for unix-process:216174:351061 (system bus name :1.2923, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 27 08:45:26 compute-0 sudo[216171]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:26 compute-0 sudo[216329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbjjxznntntelwdbmqejdqzysjcplslc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503526.3161075-3524-13997645837578/AnsiballZ_copy.py'
Jan 27 08:45:26 compute-0 sudo[216329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:26 compute-0 ceph-mon[74357]: pgmap v639: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:26 compute-0 python3.9[216331]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:26 compute-0 sudo[216329]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:26 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 27 08:45:26 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 27 08:45:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:27 compute-0 podman[216404]: 2026-01-27 08:45:27.313611779 +0000 UTC m=+0.111206878 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:45:27 compute-0 sudo[216500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmpgvxinbbrefmgbeodragerugmdimlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503527.1157784-3548-183710673547447/AnsiballZ_stat.py'
Jan 27 08:45:27 compute-0 sudo[216500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:27.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:27 compute-0 python3.9[216502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:45:27 compute-0 sudo[216500]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:27.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:27 compute-0 sudo[216623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mknsaknquygdddzybbicajkjczhwjkkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503527.1157784-3548-183710673547447/AnsiballZ_copy.py'
Jan 27 08:45:27 compute-0 sudo[216623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:28 compute-0 python3.9[216625]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503527.1157784-3548-183710673547447/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:28 compute-0 sudo[216623]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:28 compute-0 ceph-mon[74357]: pgmap v640: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:28 compute-0 sudo[216775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsndpjkijwcegwntihfahdcoceuucigz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503528.7398744-3596-139920567113750/AnsiballZ_file.py'
Jan 27 08:45:28 compute-0 sudo[216775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:29 compute-0 python3.9[216777]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:29 compute-0 sudo[216775]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 27 08:45:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:29.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 27 08:45:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:29.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:29 compute-0 sudo[216928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snjbyzplymtenhjtjmmqyfxmxxifghri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503529.4376307-3620-188764258901002/AnsiballZ_stat.py'
Jan 27 08:45:29 compute-0 sudo[216928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:29 compute-0 python3.9[216930]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:29 compute-0 sudo[216928]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:30 compute-0 sudo[217006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxspsrbnzuljkfblzmlcclxvjvrxzbvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503529.4376307-3620-188764258901002/AnsiballZ_file.py'
Jan 27 08:45:30 compute-0 sudo[217006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:30 compute-0 python3.9[217008]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:30 compute-0 sudo[217006]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:30 compute-0 ceph-mon[74357]: pgmap v641: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:30 compute-0 sudo[217158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liiufgkflxsiunmsghriafkhevypzhgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503530.6681328-3656-64612309188680/AnsiballZ_stat.py'
Jan 27 08:45:30 compute-0 sudo[217158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:31 compute-0 python3.9[217160]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:31 compute-0 sudo[217158]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:31 compute-0 sudo[217237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xexxvvgxiuegyleyzusjvtkjzcaiqvhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503530.6681328-3656-64612309188680/AnsiballZ_file.py'
Jan 27 08:45:31 compute-0 sudo[217237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:31.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:31 compute-0 python3.9[217239]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8pjf7ieo recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:31 compute-0 sudo[217237]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:31.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:32 compute-0 sudo[217389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jumciomzuornwaxuuuvelpvjrywadcse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503531.8472502-3692-13817760165036/AnsiballZ_stat.py'
Jan 27 08:45:32 compute-0 sudo[217389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:32 compute-0 python3.9[217391]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:32 compute-0 sudo[217389]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:45:32 compute-0 sudo[217467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpffgvlopdjjjtyxetpxhjtrotvxsrys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503531.8472502-3692-13817760165036/AnsiballZ_file.py'
Jan 27 08:45:32 compute-0 sudo[217467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:32 compute-0 python3.9[217469]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:32 compute-0 sudo[217467]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:33 compute-0 ceph-mon[74357]: pgmap v642: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:33.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:33 compute-0 sudo[217620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfcgxtypmckifdriecuhkbfotswaytap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503533.221589-3731-274001630018326/AnsiballZ_command.py'
Jan 27 08:45:33 compute-0 sudo[217620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:33.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:33 compute-0 python3.9[217622]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:45:33 compute-0 sudo[217620]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:34 compute-0 sudo[217671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:34 compute-0 sudo[217671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:34 compute-0 sudo[217671]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:34 compute-0 sudo[217722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:45:34 compute-0 sudo[217722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:34 compute-0 sudo[217722]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:34 compute-0 sudo[217750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:34 compute-0 sudo[217750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:34 compute-0 sudo[217750]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:34 compute-0 sudo[217775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:45:34 compute-0 sudo[217775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:34 compute-0 ceph-mon[74357]: pgmap v643: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:34 compute-0 sudo[217886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkxunaybuswetmjulovzkgosyzxhfeiz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769503534.0024867-3755-237149431410317/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 08:45:34 compute-0 sudo[217886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:34 compute-0 python3[217888]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 08:45:34 compute-0 sudo[217886]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:34 compute-0 sudo[217775]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:45:34 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:45:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:45:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:45:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:45:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:45:34 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev a7ce6ae9-d330-4cf8-b4a7-a20f6cde1304 does not exist
Jan 27 08:45:34 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 3b544423-674f-4c58-b67a-13a9a8e61dab does not exist
Jan 27 08:45:34 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev ea46d7b0-ac93-4da9-80a8-1ad5900c5b59 does not exist
Jan 27 08:45:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:45:34 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:45:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:45:34 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:45:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:45:34 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:45:34 compute-0 sudo[217953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:34 compute-0 sudo[217953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:34 compute-0 sudo[217953]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:34 compute-0 sudo[218007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:45:34 compute-0 sudo[218007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:34 compute-0 sudo[218007]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:35 compute-0 sudo[218032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:35 compute-0 sudo[218032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:35 compute-0 sudo[218032]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:35 compute-0 sudo[218080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:45:35 compute-0 sudo[218080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:35 compute-0 sudo[218156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctyvnultnyrqzagnbxvqyszamguqshqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503534.86201-3779-144630007395756/AnsiballZ_stat.py'
Jan 27 08:45:35 compute-0 sudo[218156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:35 compute-0 python3.9[218158]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:35 compute-0 podman[218200]: 2026-01-27 08:45:35.407163446 +0000 UTC m=+0.048935477 container create 0d5d687eb0c35d66559c2becd3f36a46e954233d293e7cc7c842b2b482083ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_allen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:45:35 compute-0 sudo[218156]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:45:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:45:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:45:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:45:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:45:35 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:45:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:35.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:35 compute-0 systemd[1]: Started libpod-conmon-0d5d687eb0c35d66559c2becd3f36a46e954233d293e7cc7c842b2b482083ade.scope.
Jan 27 08:45:35 compute-0 podman[218200]: 2026-01-27 08:45:35.381680754 +0000 UTC m=+0.023452885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:45:35 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:45:35 compute-0 podman[218200]: 2026-01-27 08:45:35.513529339 +0000 UTC m=+0.155301370 container init 0d5d687eb0c35d66559c2becd3f36a46e954233d293e7cc7c842b2b482083ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_allen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:45:35 compute-0 podman[218200]: 2026-01-27 08:45:35.521949368 +0000 UTC m=+0.163721399 container start 0d5d687eb0c35d66559c2becd3f36a46e954233d293e7cc7c842b2b482083ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 08:45:35 compute-0 podman[218200]: 2026-01-27 08:45:35.525943572 +0000 UTC m=+0.167715623 container attach 0d5d687eb0c35d66559c2becd3f36a46e954233d293e7cc7c842b2b482083ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:45:35 compute-0 systemd[1]: libpod-0d5d687eb0c35d66559c2becd3f36a46e954233d293e7cc7c842b2b482083ade.scope: Deactivated successfully.
Jan 27 08:45:35 compute-0 naughty_allen[218229]: 167 167
Jan 27 08:45:35 compute-0 conmon[218229]: conmon 0d5d687eb0c35d66559c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d5d687eb0c35d66559c2becd3f36a46e954233d293e7cc7c842b2b482083ade.scope/container/memory.events
Jan 27 08:45:35 compute-0 podman[218200]: 2026-01-27 08:45:35.529638329 +0000 UTC m=+0.171410360 container died 0d5d687eb0c35d66559c2becd3f36a46e954233d293e7cc7c842b2b482083ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_allen, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-02457cf0314b6c5ce9acb7fd24fa948afe9865928da0de123d1d4704f6a1ede2-merged.mount: Deactivated successfully.
Jan 27 08:45:35 compute-0 podman[218200]: 2026-01-27 08:45:35.573385693 +0000 UTC m=+0.215157724 container remove 0d5d687eb0c35d66559c2becd3f36a46e954233d293e7cc7c842b2b482083ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_allen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 08:45:35 compute-0 systemd[1]: libpod-conmon-0d5d687eb0c35d66559c2becd3f36a46e954233d293e7cc7c842b2b482083ade.scope: Deactivated successfully.
Jan 27 08:45:35 compute-0 sudo[218309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpxvdweqzlpnxofsfrbxlyaktcabajrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503534.86201-3779-144630007395756/AnsiballZ_file.py'
Jan 27 08:45:35 compute-0 sudo[218309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:35.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:35 compute-0 podman[218317]: 2026-01-27 08:45:35.785900914 +0000 UTC m=+0.055235977 container create 3a672bebd59b30679448ddca746e8c51ab786c99ad06598b50cbf2643d32b2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:45:35 compute-0 python3.9[218311]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:35 compute-0 systemd[1]: Started libpod-conmon-3a672bebd59b30679448ddca746e8c51ab786c99ad06598b50cbf2643d32b2f7.scope.
Jan 27 08:45:35 compute-0 sudo[218309]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:35 compute-0 podman[218317]: 2026-01-27 08:45:35.759130591 +0000 UTC m=+0.028465734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:45:35 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1f50eac62107261f8dd5e08c1418705bb6b03512afd5b5a780493322e3d52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1f50eac62107261f8dd5e08c1418705bb6b03512afd5b5a780493322e3d52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1f50eac62107261f8dd5e08c1418705bb6b03512afd5b5a780493322e3d52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1f50eac62107261f8dd5e08c1418705bb6b03512afd5b5a780493322e3d52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1f50eac62107261f8dd5e08c1418705bb6b03512afd5b5a780493322e3d52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:35 compute-0 podman[218317]: 2026-01-27 08:45:35.891366595 +0000 UTC m=+0.160701678 container init 3a672bebd59b30679448ddca746e8c51ab786c99ad06598b50cbf2643d32b2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 08:45:35 compute-0 podman[218317]: 2026-01-27 08:45:35.904500765 +0000 UTC m=+0.173835818 container start 3a672bebd59b30679448ddca746e8c51ab786c99ad06598b50cbf2643d32b2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:45:35 compute-0 podman[218317]: 2026-01-27 08:45:35.909137625 +0000 UTC m=+0.178472698 container attach 3a672bebd59b30679448ddca746e8c51ab786c99ad06598b50cbf2643d32b2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:45:36 compute-0 sudo[218494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rncccdxctwwbavrnnclelbdlqrevwkwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503536.3398376-3815-77921346751973/AnsiballZ_stat.py'
Jan 27 08:45:36 compute-0 sudo[218494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:36 compute-0 recursing_yonath[218334]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:45:36 compute-0 recursing_yonath[218334]: --> relative data size: 1.0
Jan 27 08:45:36 compute-0 recursing_yonath[218334]: --> All data devices are unavailable
Jan 27 08:45:36 compute-0 systemd[1]: libpod-3a672bebd59b30679448ddca746e8c51ab786c99ad06598b50cbf2643d32b2f7.scope: Deactivated successfully.
Jan 27 08:45:36 compute-0 podman[218317]: 2026-01-27 08:45:36.751337152 +0000 UTC m=+1.020672215 container died 3a672bebd59b30679448ddca746e8c51ab786c99ad06598b50cbf2643d32b2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:45:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-35e1f50eac62107261f8dd5e08c1418705bb6b03512afd5b5a780493322e3d52-merged.mount: Deactivated successfully.
Jan 27 08:45:36 compute-0 podman[218317]: 2026-01-27 08:45:36.823560177 +0000 UTC m=+1.092895250 container remove 3a672bebd59b30679448ddca746e8c51ab786c99ad06598b50cbf2643d32b2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 27 08:45:36 compute-0 systemd[1]: libpod-conmon-3a672bebd59b30679448ddca746e8c51ab786c99ad06598b50cbf2643d32b2f7.scope: Deactivated successfully.
Jan 27 08:45:36 compute-0 sudo[218080]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:36 compute-0 sudo[218514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:36 compute-0 sudo[218514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:36 compute-0 python3.9[218499]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:36 compute-0 sudo[218514]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:36 compute-0 sudo[218494]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:36 compute-0 ceph-mon[74357]: pgmap v644: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:36 compute-0 sudo[218540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:45:36 compute-0 sudo[218540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:36 compute-0 sudo[218540]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:37 compute-0 sudo[218567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:37 compute-0 sudo[218567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:37 compute-0 sudo[218567]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:37 compute-0 sudo[218616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:45:37 compute-0 sudo[218616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:37 compute-0 sudo[218763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfdvxlaqqafhnhyjqbntmldozoqkwatg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503536.3398376-3815-77921346751973/AnsiballZ_copy.py'
Jan 27 08:45:37 compute-0 sudo[218763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 27 08:45:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:37.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 27 08:45:37 compute-0 podman[218780]: 2026-01-27 08:45:37.465656776 +0000 UTC m=+0.043521428 container create 8f84af2f912213d543a700c8a8fbc5c86d1aa2847a1e791dc57b5d7c274d22d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:45:37 compute-0 systemd[1]: Started libpod-conmon-8f84af2f912213d543a700c8a8fbc5c86d1aa2847a1e791dc57b5d7c274d22d2.scope.
Jan 27 08:45:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:45:37 compute-0 podman[218780]: 2026-01-27 08:45:37.446390862 +0000 UTC m=+0.024255534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:45:37 compute-0 python3.9[218767]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503536.3398376-3815-77921346751973/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:37 compute-0 podman[218780]: 2026-01-27 08:45:37.561325867 +0000 UTC m=+0.139190539 container init 8f84af2f912213d543a700c8a8fbc5c86d1aa2847a1e791dc57b5d7c274d22d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:45:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:45:37 compute-0 podman[218780]: 2026-01-27 08:45:37.575560024 +0000 UTC m=+0.153424676 container start 8f84af2f912213d543a700c8a8fbc5c86d1aa2847a1e791dc57b5d7c274d22d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cartwright, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:45:37 compute-0 podman[218780]: 2026-01-27 08:45:37.579580398 +0000 UTC m=+0.157445060 container attach 8f84af2f912213d543a700c8a8fbc5c86d1aa2847a1e791dc57b5d7c274d22d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cartwright, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:45:37 compute-0 nice_cartwright[218796]: 167 167
Jan 27 08:45:37 compute-0 sudo[218763]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:37 compute-0 systemd[1]: libpod-8f84af2f912213d543a700c8a8fbc5c86d1aa2847a1e791dc57b5d7c274d22d2.scope: Deactivated successfully.
Jan 27 08:45:37 compute-0 podman[218780]: 2026-01-27 08:45:37.582990549 +0000 UTC m=+0.160855191 container died 8f84af2f912213d543a700c8a8fbc5c86d1aa2847a1e791dc57b5d7c274d22d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cartwright, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6b7424230b81c213525450a8eb815fb4e47eceb0798f74dee4299b9f9caf3cb-merged.mount: Deactivated successfully.
Jan 27 08:45:37 compute-0 podman[218780]: 2026-01-27 08:45:37.620861164 +0000 UTC m=+0.198725816 container remove 8f84af2f912213d543a700c8a8fbc5c86d1aa2847a1e791dc57b5d7c274d22d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cartwright, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 08:45:37 compute-0 systemd[1]: libpod-conmon-8f84af2f912213d543a700c8a8fbc5c86d1aa2847a1e791dc57b5d7c274d22d2.scope: Deactivated successfully.
Jan 27 08:45:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:37.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:37 compute-0 podman[218867]: 2026-01-27 08:45:37.817231153 +0000 UTC m=+0.051029437 container create 3304a3bf74eee5b742d61a7ec409afe5719a78a2e82a4f3e1446aef1f61fae35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lalande, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:45:37 compute-0 systemd[1]: Started libpod-conmon-3304a3bf74eee5b742d61a7ec409afe5719a78a2e82a4f3e1446aef1f61fae35.scope.
Jan 27 08:45:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334b5d8c74b10523b7c4d708943dfe74c694615f1171c4da6f838b5e1fa1104f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:37 compute-0 podman[218867]: 2026-01-27 08:45:37.79762207 +0000 UTC m=+0.031420374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334b5d8c74b10523b7c4d708943dfe74c694615f1171c4da6f838b5e1fa1104f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334b5d8c74b10523b7c4d708943dfe74c694615f1171c4da6f838b5e1fa1104f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334b5d8c74b10523b7c4d708943dfe74c694615f1171c4da6f838b5e1fa1104f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:38 compute-0 podman[218867]: 2026-01-27 08:45:38.267520641 +0000 UTC m=+0.501318945 container init 3304a3bf74eee5b742d61a7ec409afe5719a78a2e82a4f3e1446aef1f61fae35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 27 08:45:38 compute-0 podman[218867]: 2026-01-27 08:45:38.274949866 +0000 UTC m=+0.508748150 container start 3304a3bf74eee5b742d61a7ec409afe5719a78a2e82a4f3e1446aef1f61fae35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lalande, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 27 08:45:38 compute-0 podman[218867]: 2026-01-27 08:45:38.277956117 +0000 UTC m=+0.511754421 container attach 3304a3bf74eee5b742d61a7ec409afe5719a78a2e82a4f3e1446aef1f61fae35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 27 08:45:38 compute-0 sudo[218991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywdutqekxcrrvxlpqbmhpaousblfndeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503537.7394965-3860-7861764113276/AnsiballZ_stat.py'
Jan 27 08:45:38 compute-0 sudo[218991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:38 compute-0 ceph-mon[74357]: pgmap v645: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:38 compute-0 python3.9[218993]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:38 compute-0 sudo[218991]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:38 compute-0 sudo[219069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmdicvcilaneuerragobkzpjfdjreezd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503537.7394965-3860-7861764113276/AnsiballZ_file.py'
Jan 27 08:45:38 compute-0 sudo[219069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:38 compute-0 python3.9[219071]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:38 compute-0 sudo[219069]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:38 compute-0 laughing_lalande[218913]: {
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:     "0": [
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:         {
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             "devices": [
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "/dev/loop3"
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             ],
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             "lv_name": "ceph_lv0",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             "lv_size": "7511998464",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             "name": "ceph_lv0",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             "tags": {
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "ceph.cluster_name": "ceph",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "ceph.crush_device_class": "",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "ceph.encrypted": "0",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "ceph.osd_id": "0",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "ceph.type": "block",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:                 "ceph.vdo": "0"
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             },
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             "type": "block",
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:             "vg_name": "ceph_vg0"
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:         }
Jan 27 08:45:38 compute-0 laughing_lalande[218913]:     ]
Jan 27 08:45:38 compute-0 laughing_lalande[218913]: }
Jan 27 08:45:39 compute-0 systemd[1]: libpod-3304a3bf74eee5b742d61a7ec409afe5719a78a2e82a4f3e1446aef1f61fae35.scope: Deactivated successfully.
Jan 27 08:45:39 compute-0 podman[218867]: 2026-01-27 08:45:39.001428158 +0000 UTC m=+1.235226442 container died 3304a3bf74eee5b742d61a7ec409afe5719a78a2e82a4f3e1446aef1f61fae35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lalande, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-334b5d8c74b10523b7c4d708943dfe74c694615f1171c4da6f838b5e1fa1104f-merged.mount: Deactivated successfully.
Jan 27 08:45:39 compute-0 podman[218867]: 2026-01-27 08:45:39.060248103 +0000 UTC m=+1.294046387 container remove 3304a3bf74eee5b742d61a7ec409afe5719a78a2e82a4f3e1446aef1f61fae35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 27 08:45:39 compute-0 systemd[1]: libpod-conmon-3304a3bf74eee5b742d61a7ec409afe5719a78a2e82a4f3e1446aef1f61fae35.scope: Deactivated successfully.
Jan 27 08:45:39 compute-0 sudo[218616]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:39 compute-0 sudo[219113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:39 compute-0 sudo[219113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:39 compute-0 sudo[219113]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:39 compute-0 sudo[219166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:45:39 compute-0 sudo[219166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:39 compute-0 sudo[219166]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:39 compute-0 sudo[219214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:39 compute-0 sudo[219214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:39 compute-0 sudo[219214]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:39 compute-0 sudo[219262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:45:39 compute-0 sudo[219262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:39 compute-0 sudo[219337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anmpwrqxcytvadnseuchldofarsbfvjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503539.1323707-3896-32375185236007/AnsiballZ_stat.py'
Jan 27 08:45:39 compute-0 sudo[219337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:45:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:39.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:45:39 compute-0 sudo[219352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:39 compute-0 sudo[219352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:39 compute-0 sudo[219352]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:39 compute-0 python3.9[219339]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:39 compute-0 sudo[219391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:39 compute-0 sudo[219391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:39 compute-0 sudo[219391]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:39 compute-0 sudo[219337]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:39 compute-0 podman[219428]: 2026-01-27 08:45:39.685539757 +0000 UTC m=+0.042137878 container create 3094db3b3f0d6b10a238d9cc6b75f4770c349f0060a3acadc05c8bb94a9b39c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 08:45:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:39.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:39 compute-0 systemd[1]: Started libpod-conmon-3094db3b3f0d6b10a238d9cc6b75f4770c349f0060a3acadc05c8bb94a9b39c7.scope.
Jan 27 08:45:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:45:39 compute-0 podman[219428]: 2026-01-27 08:45:39.755140057 +0000 UTC m=+0.111738198 container init 3094db3b3f0d6b10a238d9cc6b75f4770c349f0060a3acadc05c8bb94a9b39c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:45:39 compute-0 podman[219428]: 2026-01-27 08:45:39.764079422 +0000 UTC m=+0.120677543 container start 3094db3b3f0d6b10a238d9cc6b75f4770c349f0060a3acadc05c8bb94a9b39c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:45:39 compute-0 podman[219428]: 2026-01-27 08:45:39.669134916 +0000 UTC m=+0.025733057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:45:39 compute-0 pensive_mccarthy[219470]: 167 167
Jan 27 08:45:39 compute-0 systemd[1]: libpod-3094db3b3f0d6b10a238d9cc6b75f4770c349f0060a3acadc05c8bb94a9b39c7.scope: Deactivated successfully.
Jan 27 08:45:39 compute-0 podman[219428]: 2026-01-27 08:45:39.769325276 +0000 UTC m=+0.125923417 container attach 3094db3b3f0d6b10a238d9cc6b75f4770c349f0060a3acadc05c8bb94a9b39c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:45:39 compute-0 podman[219428]: 2026-01-27 08:45:39.769617814 +0000 UTC m=+0.126215935 container died 3094db3b3f0d6b10a238d9cc6b75f4770c349f0060a3acadc05c8bb94a9b39c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-64384d7f90159dee8a09ac8bf198a9c323ac3f959fd7fe563ace70479b819aca-merged.mount: Deactivated successfully.
Jan 27 08:45:39 compute-0 podman[219428]: 2026-01-27 08:45:39.802386324 +0000 UTC m=+0.158984445 container remove 3094db3b3f0d6b10a238d9cc6b75f4770c349f0060a3acadc05c8bb94a9b39c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 08:45:39 compute-0 systemd[1]: libpod-conmon-3094db3b3f0d6b10a238d9cc6b75f4770c349f0060a3acadc05c8bb94a9b39c7.scope: Deactivated successfully.
Jan 27 08:45:39 compute-0 sudo[219539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etmsumhcledshldvisgvbpuloslgwnpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503539.1323707-3896-32375185236007/AnsiballZ_file.py'
Jan 27 08:45:39 compute-0 sudo[219539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:39 compute-0 podman[219547]: 2026-01-27 08:45:39.952505634 +0000 UTC m=+0.038686833 container create 1ded08b300be20e51a173cf6ecb261643179e6b51a9633cc9e83e957e43f22ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_robinson, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:45:40 compute-0 systemd[1]: Started libpod-conmon-1ded08b300be20e51a173cf6ecb261643179e6b51a9633cc9e83e957e43f22ba.scope.
Jan 27 08:45:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:45:40 compute-0 podman[219547]: 2026-01-27 08:45:39.936087633 +0000 UTC m=+0.022268842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170e46103eb307652ecf93ea1634b4267641cf59f4138ed33052953ac662cb1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170e46103eb307652ecf93ea1634b4267641cf59f4138ed33052953ac662cb1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170e46103eb307652ecf93ea1634b4267641cf59f4138ed33052953ac662cb1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170e46103eb307652ecf93ea1634b4267641cf59f4138ed33052953ac662cb1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:40 compute-0 podman[219547]: 2026-01-27 08:45:40.043259556 +0000 UTC m=+0.129440765 container init 1ded08b300be20e51a173cf6ecb261643179e6b51a9633cc9e83e957e43f22ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_robinson, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:45:40 compute-0 podman[219547]: 2026-01-27 08:45:40.052439348 +0000 UTC m=+0.138620547 container start 1ded08b300be20e51a173cf6ecb261643179e6b51a9633cc9e83e957e43f22ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 08:45:40 compute-0 podman[219547]: 2026-01-27 08:45:40.055754058 +0000 UTC m=+0.141935287 container attach 1ded08b300be20e51a173cf6ecb261643179e6b51a9633cc9e83e957e43f22ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:45:40 compute-0 python3.9[219541]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:40 compute-0 sudo[219539]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:40 compute-0 ceph-mon[74357]: pgmap v646: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:40 compute-0 sudo[219727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhmksnowsxnvtmbuuakuiisjpgiarkqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503540.4221501-3932-20956873118998/AnsiballZ_stat.py'
Jan 27 08:45:40 compute-0 sudo[219727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:40 compute-0 youthful_robinson[219563]: {
Jan 27 08:45:40 compute-0 youthful_robinson[219563]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:45:40 compute-0 youthful_robinson[219563]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:45:40 compute-0 youthful_robinson[219563]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:45:40 compute-0 youthful_robinson[219563]:         "osd_id": 0,
Jan 27 08:45:40 compute-0 youthful_robinson[219563]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:45:40 compute-0 youthful_robinson[219563]:         "type": "bluestore"
Jan 27 08:45:40 compute-0 youthful_robinson[219563]:     }
Jan 27 08:45:40 compute-0 youthful_robinson[219563]: }
Jan 27 08:45:40 compute-0 systemd[1]: libpod-1ded08b300be20e51a173cf6ecb261643179e6b51a9633cc9e83e957e43f22ba.scope: Deactivated successfully.
Jan 27 08:45:40 compute-0 podman[219547]: 2026-01-27 08:45:40.899661313 +0000 UTC m=+0.985842512 container died 1ded08b300be20e51a173cf6ecb261643179e6b51a9633cc9e83e957e43f22ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_robinson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-170e46103eb307652ecf93ea1634b4267641cf59f4138ed33052953ac662cb1a-merged.mount: Deactivated successfully.
Jan 27 08:45:40 compute-0 podman[219547]: 2026-01-27 08:45:40.954954681 +0000 UTC m=+1.041135880 container remove 1ded08b300be20e51a173cf6ecb261643179e6b51a9633cc9e83e957e43f22ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_robinson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:45:40 compute-0 systemd[1]: libpod-conmon-1ded08b300be20e51a173cf6ecb261643179e6b51a9633cc9e83e957e43f22ba.scope: Deactivated successfully.
Jan 27 08:45:40 compute-0 sudo[219262]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:45:41 compute-0 python3.9[219730]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:45:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:45:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:45:41 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d6815f60-3826-4590-a955-fe14a4ec1b6e does not exist
Jan 27 08:45:41 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 7bc1c548-9fcc-466e-80fe-58af7a274edf does not exist
Jan 27 08:45:41 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev a762f8d9-31be-4bc2-8700-1eaece1c28ac does not exist
Jan 27 08:45:41 compute-0 sudo[219727]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:41 compute-0 sudo[219752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:41 compute-0 sudo[219752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:41 compute-0 sudo[219752]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:41 compute-0 sudo[219779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:45:41 compute-0 sudo[219779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:41 compute-0 sudo[219779]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:41 compute-0 sudo[219923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfwtpvlqhrsahnudisxpzbdjmfhdrwlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503540.4221501-3932-20956873118998/AnsiballZ_copy.py'
Jan 27 08:45:41 compute-0 sudo[219923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:41.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:41 compute-0 python3.9[219925]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769503540.4221501-3932-20956873118998/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:41 compute-0 sudo[219923]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:41.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:45:42 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:45:42 compute-0 sudo[220075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mohiuzwqvnfszyduzcuuiibiyiwnyjws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503541.887871-3977-27452963257028/AnsiballZ_file.py'
Jan 27 08:45:42 compute-0 sudo[220075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:42 compute-0 python3.9[220077]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:42 compute-0 sudo[220075]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.585636) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503542585672, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1658, "num_deletes": 501, "total_data_size": 2531852, "memory_usage": 2577200, "flush_reason": "Manual Compaction"}
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503542596521, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1458624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14079, "largest_seqno": 15736, "table_properties": {"data_size": 1453106, "index_size": 2273, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16520, "raw_average_key_size": 19, "raw_value_size": 1439238, "raw_average_value_size": 1673, "num_data_blocks": 104, "num_entries": 860, "num_filter_entries": 860, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769503397, "oldest_key_time": 1769503397, "file_creation_time": 1769503542, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 10958 microseconds, and 4706 cpu microseconds.
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.596587) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1458624 bytes OK
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.596619) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.598574) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.598589) EVENT_LOG_v1 {"time_micros": 1769503542598584, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.598616) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2523885, prev total WAL file size 2523885, number of live WAL files 2.
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.599577) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1424KB)], [32(10MB)]
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503542599643, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 12578761, "oldest_snapshot_seqno": -1}
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4194 keys, 8001651 bytes, temperature: kUnknown
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503542636722, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 8001651, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7972364, "index_size": 17721, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 103649, "raw_average_key_size": 24, "raw_value_size": 7895194, "raw_average_value_size": 1882, "num_data_blocks": 747, "num_entries": 4194, "num_filter_entries": 4194, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769503542, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.637156) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8001651 bytes
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.638817) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 338.1 rd, 215.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.6 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(14.1) write-amplify(5.5) OK, records in: 5143, records dropped: 949 output_compression: NoCompression
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.638836) EVENT_LOG_v1 {"time_micros": 1769503542638825, "job": 14, "event": "compaction_finished", "compaction_time_micros": 37201, "compaction_time_cpu_micros": 17935, "output_level": 6, "num_output_files": 1, "total_output_size": 8001651, "num_input_records": 5143, "num_output_records": 4194, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503542639261, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503542641149, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.599477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.641226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.641233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.641235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.641236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:45:42 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:45:42.641239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:45:43 compute-0 sudo[220227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpspbttxtmrjcaelkctvmcvvpwhvpdgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503542.7273462-4001-85120361016214/AnsiballZ_command.py'
Jan 27 08:45:43 compute-0 sudo[220227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:43 compute-0 ceph-mon[74357]: pgmap v647: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:43 compute-0 python3.9[220229]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:45:43 compute-0 sudo[220227]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:43.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:45:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:43.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:45:44 compute-0 sudo[220383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqtsiesfaxemajfmazzimfielauxrntv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503543.5440185-4025-31476972088619/AnsiballZ_blockinfile.py'
Jan 27 08:45:44 compute-0 sudo[220383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:44 compute-0 python3.9[220385]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:44 compute-0 sudo[220383]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:44 compute-0 sudo[220535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gidyppnanvigghzvrpqilofrtfcsxtaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503544.6682951-4052-65680758835524/AnsiballZ_command.py'
Jan 27 08:45:44 compute-0 sudo[220535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:45:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:45:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:45:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:45:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:45:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:45:45 compute-0 ceph-mon[74357]: pgmap v648: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:45 compute-0 python3.9[220537]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:45:45 compute-0 sudo[220535]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:45.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:45:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:45.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:45:45 compute-0 sudo[220689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruecvsjdwsoomniwhbsnejxcruhmffdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503545.4500964-4076-232876486933118/AnsiballZ_stat.py'
Jan 27 08:45:45 compute-0 sudo[220689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:46 compute-0 python3.9[220691]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:45:46 compute-0 sudo[220689]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:46 compute-0 sudo[220843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-canoufeqesduffzqwjnesbwvrokvnqij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503546.2684927-4100-126873549759506/AnsiballZ_command.py'
Jan 27 08:45:46 compute-0 sudo[220843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:46 compute-0 python3.9[220845]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:45:46 compute-0 sudo[220843]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:47 compute-0 ceph-mon[74357]: pgmap v649: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:47.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:47 compute-0 sudo[220999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obdjtfzrvlyswxgdzyaxmtbylrjqtqqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503547.183268-4124-152069906190513/AnsiballZ_file.py'
Jan 27 08:45:47 compute-0 sudo[220999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:45:47 compute-0 python3.9[221001]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:47 compute-0 sudo[220999]: pam_unix(sudo:session): session closed for user root
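Taken together with the stat at 08:45:46 and the nft reload in between, the file task above completes a marker-file handshake: an earlier step touches edpm-rules.nft.changed when it rewrites the rules, the reload runs only while the marker exists, and this task removes it so the reload is skipped on an unchanged re-run. A condensed sketch of that handshake (a single-file reload stands in for the concatenated one shown earlier):

```python
import os
import subprocess

# Reload nftables rules only when the "changed" marker left by the
# templating step exists, then clear the marker for idempotent re-runs.
MARKER = '/etc/nftables/edpm-rules.nft.changed'
if os.path.exists(MARKER):
    subprocess.run(['nft', '-f', '/etc/nftables/edpm-rules.nft'], check=True)
    os.remove(MARKER)
```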
Jan 27 08:45:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:47.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:48 compute-0 sudo[221151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahzgrkcwoammqmvcohrlpvgtkkexuwgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503547.9657989-4148-249293164996273/AnsiballZ_stat.py'
Jan 27 08:45:48 compute-0 sudo[221151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:48 compute-0 python3.9[221153]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:48 compute-0 sudo[221151]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:48 compute-0 sudo[221274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iormnneocncnclaswwautmpaekgdhnjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503547.9657989-4148-249293164996273/AnsiballZ_copy.py'
Jan 27 08:45:48 compute-0 sudo[221274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:48 compute-0 python3.9[221276]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503547.9657989-4148-249293164996273/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:48 compute-0 sudo[221274]: pam_unix(sudo:session): session closed for user root
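The stat/copy pairs above are Ansible's checksum-gated transfer: ansible.legacy.stat hashes the destination with SHA-1, and the copy only happens when that digest differs from the staged source (the checksum= value in the copy invocation). A minimal sketch of the same check; the helper name is hypothetical:

```python
import hashlib
import shutil

def copy_if_changed(src: str, dest: str) -> bool:
    """Copy src over dest only when their SHA-1 digests differ."""
    def sha1(path: str) -> str:
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        return h.hexdigest()

    try:
        if sha1(src) == sha1(dest):
            return False          # already in sync: task reports "ok"
    except FileNotFoundError:
        pass                      # destination absent: fall through to copy
    shutil.copy2(src, dest)
    return True                   # copied: task reports "changed"
```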
Jan 27 08:45:49 compute-0 ceph-mon[74357]: pgmap v650: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:49.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:49 compute-0 sudo[221427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffplgsnanfbpphvhthuybyjiyrnwhyjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503549.2397208-4193-254382599700975/AnsiballZ_stat.py'
Jan 27 08:45:49 compute-0 sudo[221427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:49 compute-0 podman[221429]: 2026-01-27 08:45:49.642201988 +0000 UTC m=+0.074866696 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
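The podman health_status event above is emitted when podman runs the container's configured healthcheck (the 'test': '/openstack/healthcheck' script bind-mounted into the container); a passing run resets health_failing_streak to 0. A sketch that triggers the same check on demand, with the container name taken from the event and the wrapper function hypothetical:

```python
import subprocess

def container_healthy(name: str) -> bool:
    """Run the container's configured healthcheck; exit code 0 means healthy."""
    result = subprocess.run(['podman', 'healthcheck', 'run', name],
                            capture_output=True, text=True)
    return result.returncode == 0

print('healthy' if container_healthy('ovn_controller') else 'unhealthy')
```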
Jan 27 08:45:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:45:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:49.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:45:49 compute-0 python3.9[221430]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:49 compute-0 sudo[221427]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:50 compute-0 sudo[221577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jorbivenzbuotsdaodrmmhskebnbszrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503549.2397208-4193-254382599700975/AnsiballZ_copy.py'
Jan 27 08:45:50 compute-0 sudo[221577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:50 compute-0 python3.9[221579]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503549.2397208-4193-254382599700975/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:50 compute-0 sudo[221577]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:50 compute-0 ceph-mon[74357]: pgmap v651: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:45:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:51.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:45:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:45:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:51.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:45:52 compute-0 sudo[221730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqilmauoxumhrsyehilwxjndgbcytxfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503551.7912533-4238-108856913211284/AnsiballZ_stat.py'
Jan 27 08:45:52 compute-0 sudo[221730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:52 compute-0 python3.9[221732]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:45:52 compute-0 sudo[221730]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:45:52 compute-0 sudo[221853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-admrmejrgxsxirximujkxsplydhpmxre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503551.7912533-4238-108856913211284/AnsiballZ_copy.py'
Jan 27 08:45:52 compute-0 sudo[221853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:52 compute-0 python3.9[221855]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503551.7912533-4238-108856913211284/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:45:52 compute-0 sudo[221853]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:53 compute-0 ceph-mon[74357]: pgmap v652: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:53 compute-0 sudo[222006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqtqckzhebrxqctmuuypbfejslbddmfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503553.132461-4283-265400121920659/AnsiballZ_systemd.py'
Jan 27 08:45:53 compute-0 sudo[222006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:53.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:53 compute-0 python3.9[222008]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:45:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:53.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:53 compute-0 systemd[1]: Reloading.
Jan 27 08:45:53 compute-0 systemd-rc-local-generator[222030]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:45:53 compute-0 systemd-sysv-generator[222035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:45:54 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 27 08:45:54 compute-0 sudo[222006]: pam_unix(sudo:session): session closed for user root
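The systemd task at 08:45:53 explains the surrounding lines: daemon_reload=True produces the "Reloading." entry and re-runs the generators (hence the rc.local and SysV network warnings), and state=restarted on the target yields "Reached target edpm_libvirt.target." A roughly equivalent systemctl sequence; the ordering mirrors the module, each call is a standard systemctl verb:

```python
import subprocess

UNIT = 'edpm_libvirt.target'
subprocess.run(['systemctl', 'daemon-reload'], check=True)  # -> "Reloading."
subprocess.run(['systemctl', 'enable', UNIT], check=True)   # enabled=True
subprocess.run(['systemctl', 'restart', UNIT], check=True)  # -> "Reached target ..."
```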
Jan 27 08:45:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:45:54.228 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:45:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:45:54.229 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:45:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:45:54.229 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:45:54 compute-0 sudo[222196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsqvgctleaxlorgwxuyffmzrjymklclo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503554.47617-4307-94860126957531/AnsiballZ_systemd.py'
Jan 27 08:45:54 compute-0 sudo[222196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:45:55 compute-0 python3.9[222198]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 27 08:45:55 compute-0 ceph-mon[74357]: pgmap v653: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:55 compute-0 systemd[1]: Reloading.
Jan 27 08:45:55 compute-0 systemd-rc-local-generator[222227]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:45:55 compute-0 systemd-sysv-generator[222230]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:45:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:55 compute-0 systemd[1]: Reloading.
Jan 27 08:45:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:55.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:55 compute-0 systemd-rc-local-generator[222259]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:45:55 compute-0 systemd-sysv-generator[222265]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:45:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:55.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:55 compute-0 sudo[222196]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:56 compute-0 sshd-session[161123]: Connection closed by 192.168.122.30 port 33518
Jan 27 08:45:56 compute-0 sshd-session[161060]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:45:56 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 27 08:45:56 compute-0 systemd[1]: session-49.scope: Consumed 3min 27.525s CPU time.
Jan 27 08:45:56 compute-0 systemd-logind[799]: Session 49 logged out. Waiting for processes to exit.
Jan 27 08:45:56 compute-0 systemd-logind[799]: Removed session 49.
Jan 27 08:45:57 compute-0 ceph-mon[74357]: pgmap v654: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:45:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:57.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:45:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:45:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:45:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:57.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:45:58 compute-0 podman[222296]: 2026-01-27 08:45:58.291759151 +0000 UTC m=+0.088704036 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 27 08:45:59 compute-0 ceph-mon[74357]: pgmap v655: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:45:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:45:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:45:59.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:45:59 compute-0 sudo[222316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:59 compute-0 sudo[222316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:45:59 compute-0 sudo[222316]: pam_unix(sudo:session): session closed for user root
Jan 27 08:45:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.003000081s ======
Jan 27 08:45:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:45:59.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Jan 27 08:45:59 compute-0 sudo[222341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:45:59 compute-0 sudo[222341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:45:59 compute-0 sudo[222341]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:01 compute-0 ceph-mon[74357]: pgmap v656: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:01 compute-0 sshd-session[222367]: Accepted publickey for zuul from 192.168.122.30 port 45636 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:46:01 compute-0 systemd-logind[799]: New session 50 of user zuul.
Jan 27 08:46:01 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 27 08:46:01 compute-0 sshd-session[222367]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:46:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:01.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:01.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:02 compute-0 python3.9[222520]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:46:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:46:03 compute-0 ceph-mon[74357]: pgmap v657: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:03.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:03.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:03 compute-0 python3.9[222675]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:46:03 compute-0 network[222692]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:46:04 compute-0 network[222693]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:46:04 compute-0 network[222694]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:46:05 compute-0 ceph-mon[74357]: pgmap v658: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:05.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:05.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:07 compute-0 ceph-mon[74357]: pgmap v659: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:07.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:46:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:07.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:08 compute-0 ceph-mon[74357]: pgmap v660: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:08 compute-0 sudo[222966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjehbldzggiyihzkhokpfwczslmmwisp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503568.5993736-101-58687244785135/AnsiballZ_setup.py'
Jan 27 08:46:08 compute-0 sudo[222966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:09 compute-0 python3.9[222968]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:46:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:09 compute-0 sudo[222966]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:09.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:46:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:09.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:46:09 compute-0 sudo[223051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckgzyzswfxrkxmbsdalyxhzjsahmxuqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503568.5993736-101-58687244785135/AnsiballZ_dnf.py'
Jan 27 08:46:09 compute-0 sudo[223051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:10 compute-0 python3.9[223053]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:46:10 compute-0 ceph-mon[74357]: pgmap v661: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:11.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:11.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:12 compute-0 ceph-mon[74357]: pgmap v662: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:46:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:13.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:13.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:14 compute-0 ceph-mon[74357]: pgmap v663: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:46:14
Jan 27 08:46:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:46:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:46:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'volumes', '.rgw.root', 'images', 'default.rgw.log', 'backups', 'default.rgw.control']
Jan 27 08:46:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:46:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:15.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:15 compute-0 sudo[223051]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:46:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:15.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:46:16 compute-0 sudo[223207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghvluzazrtgjjvamjkjrocuaelhhwjru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503575.7260664-137-214446523775077/AnsiballZ_stat.py'
Jan 27 08:46:16 compute-0 sudo[223207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:16 compute-0 python3.9[223209]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:46:16 compute-0 sudo[223207]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:16 compute-0 ceph-mon[74357]: pgmap v664: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:16 compute-0 sudo[223359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpatybdusxgeadrxngrjsplpdmqkbyhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503576.5921538-167-134209440396102/AnsiballZ_command.py'
Jan 27 08:46:16 compute-0 sudo[223359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:17 compute-0 python3.9[223361]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:46:17 compute-0 sudo[223359]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:17.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:46:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:17.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:17 compute-0 sudo[223513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gabhuxubfmngjuyjglzmjmucblbgotaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503577.5891538-197-108286104628434/AnsiballZ_stat.py'
Jan 27 08:46:17 compute-0 sudo[223513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:18 compute-0 python3.9[223515]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:46:18 compute-0 sudo[223513]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:18 compute-0 sudo[223665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkpgpxemabpqxoqkwlnuasnmnpkmksyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503578.2967026-221-211251061630779/AnsiballZ_command.py'
Jan 27 08:46:18 compute-0 sudo[223665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:18 compute-0 ceph-mon[74357]: pgmap v665: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:18 compute-0 python3.9[223667]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:46:18 compute-0 sudo[223665]: pam_unix(sudo:session): session closed for user root
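/usr/sbin/iscsi-iname prints a fresh initiator IQN (on this distro the prefix is iqn.1994-05.com.redhat followed by a random suffix), which the subsequent tasks write into /etc/iscsi/initiatorname.iscsi. A hypothetical stand-in for illustration only; real deployments should use the tool itself:

```python
import secrets

def iscsi_iname(prefix: str = 'iqn.1994-05.com.redhat') -> str:
    """Emit an RFC 3720-style initiator name with a random suffix
    (illustrative stand-in for /usr/sbin/iscsi-iname)."""
    return f'{prefix}:{secrets.token_hex(6)}'

# The format used in /etc/iscsi/initiatorname.iscsi:
print(f'InitiatorName={iscsi_iname()}')
```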
Jan 27 08:46:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:19.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:19 compute-0 sudo[223819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkhbllrxsyoqexoczhejjvpadwbcblad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503579.006751-245-100188517731571/AnsiballZ_stat.py'
Jan 27 08:46:19 compute-0 sudo[223819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:19.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:19 compute-0 python3.9[223821]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:46:19 compute-0 sudo[223819]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:19 compute-0 sudo[223822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:19 compute-0 sudo[223822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:19 compute-0 sudo[223822]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:19 compute-0 sudo[223876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:19 compute-0 sudo[223876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:19 compute-0 sudo[223876]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:20 compute-0 podman[223864]: 2026-01-27 08:46:20.03299194 +0000 UTC m=+0.146439481 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 27 08:46:20 compute-0 sudo[224019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flyjcsvyfunrogdwkioysrghcqpuwwjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503579.006751-245-100188517731571/AnsiballZ_copy.py'
Jan 27 08:46:20 compute-0 sudo[224019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:20 compute-0 python3.9[224021]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503579.006751-245-100188517731571/.source.iscsi _original_basename=.1sv73fvi follow=False checksum=a7d061de3df8d2b2d58d4f504a82efdcc16302de backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:20 compute-0 sudo[224019]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:20 compute-0 ceph-mon[74357]: pgmap v666: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:21 compute-0 sudo[224172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqyumoodfeoljlltltedubuitjgfwlav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503580.6993742-290-114765046503277/AnsiballZ_file.py'
Jan 27 08:46:21 compute-0 sudo[224172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:21 compute-0 python3.9[224174]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:21 compute-0 sudo[224172]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:21.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:46:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:21.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:46:22 compute-0 sudo[224324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqhdrvbgfrhqitjnamcqtcmrylorpwzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503581.5857854-314-150697347101/AnsiballZ_lineinfile.py'
Jan 27 08:46:22 compute-0 sudo[224324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:22 compute-0 python3.9[224326]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:22 compute-0 sudo[224324]: pam_unix(sudo:session): session closed for user root
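The lineinfile task above ensures exactly one live node.session.auth.chap_algs line in iscsid.conf: it replaces an existing line matching the regexp, otherwise inserts after the commented default, and Ansible appends at end of file when neither pattern matches. A minimal re-creation of that logic:

```python
import re

PATH = '/etc/iscsi/iscsid.conf'
LINE = 'node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5\n'

with open(PATH) as f:
    lines = f.readlines()

for i, text in enumerate(lines):
    if re.match(r'^node\.session\.auth\.chap_algs', text):
        lines[i] = LINE           # replace the existing live setting
        break
else:
    # Insert after the commented default, or append at EOF if absent.
    anchor = next((i for i, text in enumerate(lines)
                   if re.match(r'^#node\.session\.auth\.chap\.algs', text)),
                  len(lines) - 1)
    lines.insert(anchor + 1, LINE)

with open(PATH, 'w') as f:
    f.writelines(lines)
```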
Jan 27 08:46:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:46:22 compute-0 ceph-mon[74357]: pgmap v667: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:23 compute-0 sudo[224477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbngcwgkfbgswsgothkcfaxszxkrwgee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503582.5317233-341-178329209556023/AnsiballZ_systemd_service.py'
Jan 27 08:46:23 compute-0 sudo[224477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:23 compute-0 python3.9[224479]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:46:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:46:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:23.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:46:23 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 27 08:46:23 compute-0 sudo[224477]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:23.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:24 compute-0 sudo[224633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sklpbwdqiwonqasrfqzmbkbbbembtuir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503583.8756762-365-155109547417102/AnsiballZ_systemd_service.py'
Jan 27 08:46:24 compute-0 sudo[224633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:46:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:46:24 compute-0 python3.9[224635]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:46:24 compute-0 systemd[1]: Reloading.
Jan 27 08:46:24 compute-0 systemd-rc-local-generator[224666]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:46:24 compute-0 systemd-sysv-generator[224670]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:46:24 compute-0 ceph-mon[74357]: pgmap v668: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:24 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 27 08:46:24 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 27 08:46:24 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 27 08:46:24 compute-0 systemd[1]: Started Open-iSCSI.
Jan 27 08:46:24 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 27 08:46:24 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 27 08:46:24 compute-0 sudo[224633]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:25.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:25.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:25 compute-0 python3.9[224834]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:46:26 compute-0 network[224851]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:46:26 compute-0 network[224852]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:46:26 compute-0 network[224853]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:46:26 compute-0 ceph-mon[74357]: pgmap v669: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:27.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:46:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:46:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:27.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:46:28 compute-0 podman[224931]: 2026-01-27 08:46:28.423676798 +0000 UTC m=+0.068374649 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 27 08:46:28 compute-0 ceph-mon[74357]: pgmap v670: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:46:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:29.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:46:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:29.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:30 compute-0 ceph-mon[74357]: pgmap v671: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:30 compute-0 sudo[225144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozzilgyjzjffwojvmpiyynotzpeysyal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503590.674527-434-132250818663738/AnsiballZ_dnf.py'
Jan 27 08:46:30 compute-0 sudo[225144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:31 compute-0 python3.9[225146]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:46:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:31.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:31.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:46:33 compute-0 ceph-mon[74357]: pgmap v672: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:46:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:33.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:46:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:46:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:46:33 compute-0 systemd[1]: Reloading.
Jan 27 08:46:33 compute-0 systemd-sysv-generator[225198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:46:33 compute-0 systemd-rc-local-generator[225195]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:46:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:33.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:46:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:46:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:46:34 compute-0 systemd[1]: run-re6b01144d4754758b8c6f5209e93690b.service: Deactivated successfully.
Jan 27 08:46:34 compute-0 ceph-mon[74357]: pgmap v673: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:34 compute-0 sudo[225144]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:35 compute-0 sudo[225463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtbmvahhmdgtdahvfvpjivtpsgjophqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503595.0238256-461-11433373550903/AnsiballZ_file.py'
Jan 27 08:46:35 compute-0 sudo[225463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:35.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:35 compute-0 python3.9[225465]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 27 08:46:35 compute-0 sudo[225463]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:46:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:35.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:46:36 compute-0 sudo[225615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhwjkamqtdftrbgfcsrmjolwzbqecyba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503595.7423944-485-209064882119498/AnsiballZ_modprobe.py'
Jan 27 08:46:36 compute-0 sudo[225615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:36 compute-0 python3.9[225617]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 27 08:46:36 compute-0 ceph-mon[74357]: pgmap v674: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:36 compute-0 sudo[225615]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:37 compute-0 sudo[225771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmdwtzmkvfiwtfnlkugrdrwviqriaycl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503596.7054605-509-169891347274174/AnsiballZ_stat.py'
Jan 27 08:46:37 compute-0 sudo[225771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:37 compute-0 python3.9[225774]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:46:37 compute-0 sudo[225771]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:37.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:46:37 compute-0 sudo[225895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbfangsnmqxslmaxwbzfphiumkoguiev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503596.7054605-509-169891347274174/AnsiballZ_copy.py'
Jan 27 08:46:37 compute-0 sudo[225895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:37.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:37 compute-0 python3.9[225897]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503596.7054605-509-169891347274174/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:37 compute-0 sudo[225895]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:38 compute-0 ceph-mon[74357]: pgmap v675: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:38 compute-0 sudo[226047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdhtybjwwxedkzjqkyjbycweynemibin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503598.2699294-557-332783487499/AnsiballZ_lineinfile.py'
Jan 27 08:46:38 compute-0 sudo[226047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:38 compute-0 python3.9[226049]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:38 compute-0 sudo[226047]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:39.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:39 compute-0 sudo[226200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-samixobrjhtziytnrxriegixupvlqcwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503598.9662967-581-80090404417459/AnsiballZ_systemd.py'
Jan 27 08:46:39 compute-0 sudo[226200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:39.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:39 compute-0 python3.9[226202]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:46:39 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 27 08:46:39 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 27 08:46:39 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 27 08:46:39 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 27 08:46:39 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 27 08:46:39 compute-0 sudo[226200]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:40 compute-0 sudo[226207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:40 compute-0 sudo[226207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:40 compute-0 sudo[226207]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:40 compute-0 sudo[226256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:40 compute-0 sudo[226256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:40 compute-0 sudo[226256]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:40 compute-0 sudo[226406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqxkjzmqvpjxlerfytakskkfajrhyghj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503600.1669414-605-172650157478218/AnsiballZ_command.py'
Jan 27 08:46:40 compute-0 sudo[226406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:40 compute-0 ceph-mon[74357]: pgmap v676: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:40 compute-0 python3.9[226408]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:46:40 compute-0 sudo[226406]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:41 compute-0 sudo[226560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojfxhrjvrkxyqhkderqzdgqkdbsvpyrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503601.0309882-635-61820502335905/AnsiballZ_stat.py'
Jan 27 08:46:41 compute-0 sudo[226560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:41 compute-0 sudo[226563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:41 compute-0 sudo[226563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:41 compute-0 sudo[226563]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:41.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:41 compute-0 sudo[226588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:46:41 compute-0 sudo[226588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:41 compute-0 sudo[226588]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:41 compute-0 python3.9[226562]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:46:41 compute-0 sudo[226560]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:41 compute-0 sudo[226613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:41 compute-0 sudo[226613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:41 compute-0 sudo[226613]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:41 compute-0 sudo[226651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:46:41 compute-0 sudo[226651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:41.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:42 compute-0 sudo[226831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyubkrwraxbaldglkttmrngwwhnafssv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503601.821551-662-135874805503792/AnsiballZ_stat.py'
Jan 27 08:46:42 compute-0 sudo[226831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:42 compute-0 sudo[226651]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:42 compute-0 python3.9[226838]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:46:42 compute-0 sudo[226831]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:46:42 compute-0 ceph-mon[74357]: pgmap v677: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:42 compute-0 sudo[226966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usnpzktaossorhzbgdxosxugvxylqway ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503601.821551-662-135874805503792/AnsiballZ_copy.py'
Jan 27 08:46:42 compute-0 sudo[226966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:42 compute-0 python3.9[226968]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503601.821551-662-135874805503792/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:42 compute-0 sudo[226966]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:43.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:43 compute-0 sudo[227119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcspecjeqxsuwfviooaetmyomafsimrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503603.374354-707-225877999803552/AnsiballZ_command.py'
Jan 27 08:46:43 compute-0 sudo[227119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:43.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:43 compute-0 python3.9[227121]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:46:43 compute-0 sudo[227119]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:46:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:46:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:46:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:46:44 compute-0 sudo[227272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbbkktsjqlaotbcbdkblmgtwbylxcpcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503604.2419357-731-253541202394722/AnsiballZ_lineinfile.py'
Jan 27 08:46:44 compute-0 sudo[227272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:44 compute-0 python3.9[227274]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:44 compute-0 sudo[227272]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:46:45 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:46:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:46:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:46:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:46:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:46:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:46:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:46:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:46:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:46:45 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 09d5c53a-ee16-4629-8e18-d7098ab4c7f5 does not exist
Jan 27 08:46:45 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 64587eed-d8bc-41e1-abdd-24537fff91f3 does not exist
Jan 27 08:46:45 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5e276377-14b4-4883-be30-567883ae8e27 does not exist
Jan 27 08:46:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:46:45 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:46:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:46:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:46:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:46:45 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:46:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:46:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:46:45 compute-0 sudo[227352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:45 compute-0 sudo[227352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:45 compute-0 sudo[227352]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:45 compute-0 sudo[227377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:46:45 compute-0 sudo[227377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:45 compute-0 sudo[227377]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:45 compute-0 ceph-mon[74357]: pgmap v678: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:46:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:46:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:46:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:46:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:46:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:46:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:46:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:46:45 compute-0 sudo[227402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:45 compute-0 sudo[227402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:45 compute-0 sudo[227402]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:45 compute-0 sudo[227427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:46:45 compute-0 sudo[227427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:45 compute-0 sudo[227537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebnanxasqipgafytfeiifzabqbxuiycl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503605.0125592-755-213643600776073/AnsiballZ_replace.py'
Jan 27 08:46:45 compute-0 sudo[227537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:45.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:45 compute-0 podman[227568]: 2026-01-27 08:46:45.598693157 +0000 UTC m=+0.040168254 container create 668dc9c3d06f61b88f857477ffab7a78726117ea07a7c3ee9c1edcad0d842732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 27 08:46:45 compute-0 systemd[1]: Started libpod-conmon-668dc9c3d06f61b88f857477ffab7a78726117ea07a7c3ee9c1edcad0d842732.scope.
Jan 27 08:46:45 compute-0 python3.9[227539]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:46:45 compute-0 podman[227568]: 2026-01-27 08:46:45.579167411 +0000 UTC m=+0.020642558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:46:45 compute-0 sudo[227537]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:45 compute-0 podman[227568]: 2026-01-27 08:46:45.685828179 +0000 UTC m=+0.127303296 container init 668dc9c3d06f61b88f857477ffab7a78726117ea07a7c3ee9c1edcad0d842732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 27 08:46:45 compute-0 podman[227568]: 2026-01-27 08:46:45.69242609 +0000 UTC m=+0.133901197 container start 668dc9c3d06f61b88f857477ffab7a78726117ea07a7c3ee9c1edcad0d842732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 27 08:46:45 compute-0 podman[227568]: 2026-01-27 08:46:45.695908565 +0000 UTC m=+0.137383702 container attach 668dc9c3d06f61b88f857477ffab7a78726117ea07a7c3ee9c1edcad0d842732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ptolemy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:46:45 compute-0 nice_ptolemy[227584]: 167 167
Jan 27 08:46:45 compute-0 systemd[1]: libpod-668dc9c3d06f61b88f857477ffab7a78726117ea07a7c3ee9c1edcad0d842732.scope: Deactivated successfully.
Jan 27 08:46:45 compute-0 podman[227568]: 2026-01-27 08:46:45.699796913 +0000 UTC m=+0.141272020 container died 668dc9c3d06f61b88f857477ffab7a78726117ea07a7c3ee9c1edcad0d842732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:46:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce20d39629d5f2535281d32ae97d43f7a874e65316ea6bf2891a6053b7e01733-merged.mount: Deactivated successfully.
Jan 27 08:46:45 compute-0 podman[227568]: 2026-01-27 08:46:45.737024594 +0000 UTC m=+0.178499691 container remove 668dc9c3d06f61b88f857477ffab7a78726117ea07a7c3ee9c1edcad0d842732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:46:45 compute-0 systemd[1]: libpod-conmon-668dc9c3d06f61b88f857477ffab7a78726117ea07a7c3ee9c1edcad0d842732.scope: Deactivated successfully.
Jan 27 08:46:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:46:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:45.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:46:45 compute-0 podman[227631]: 2026-01-27 08:46:45.878647482 +0000 UTC m=+0.035457655 container create 346269ddbf4f3999ffb92602ee250e1714c9b00de03b6337eb013b7a7c3c59c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:46:45 compute-0 systemd[1]: Started libpod-conmon-346269ddbf4f3999ffb92602ee250e1714c9b00de03b6337eb013b7a7c3c59c0.scope.
Jan 27 08:46:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31f27433cf7789e9f1b39de00c63cf51bb18e38751690126300bd12ffcba93a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31f27433cf7789e9f1b39de00c63cf51bb18e38751690126300bd12ffcba93a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31f27433cf7789e9f1b39de00c63cf51bb18e38751690126300bd12ffcba93a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31f27433cf7789e9f1b39de00c63cf51bb18e38751690126300bd12ffcba93a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31f27433cf7789e9f1b39de00c63cf51bb18e38751690126300bd12ffcba93a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:45 compute-0 podman[227631]: 2026-01-27 08:46:45.862826397 +0000 UTC m=+0.019636600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:46:45 compute-0 podman[227631]: 2026-01-27 08:46:45.966188255 +0000 UTC m=+0.122998438 container init 346269ddbf4f3999ffb92602ee250e1714c9b00de03b6337eb013b7a7c3c59c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cartwright, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:46:45 compute-0 podman[227631]: 2026-01-27 08:46:45.97767634 +0000 UTC m=+0.134486513 container start 346269ddbf4f3999ffb92602ee250e1714c9b00de03b6337eb013b7a7c3c59c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:46:45 compute-0 podman[227631]: 2026-01-27 08:46:45.982382679 +0000 UTC m=+0.139192862 container attach 346269ddbf4f3999ffb92602ee250e1714c9b00de03b6337eb013b7a7c3c59c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cartwright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:46:46 compute-0 sudo[227778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbchsbitexwslznubrqzxfdztcnuuncm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503605.9076774-779-38897684391641/AnsiballZ_replace.py'
Jan 27 08:46:46 compute-0 sudo[227778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:46 compute-0 python3.9[227780]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:46 compute-0 sudo[227778]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:46 compute-0 festive_cartwright[227675]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:46:46 compute-0 festive_cartwright[227675]: --> relative data size: 1.0
Jan 27 08:46:46 compute-0 festive_cartwright[227675]: --> All data devices are unavailable
Jan 27 08:46:46 compute-0 systemd[1]: libpod-346269ddbf4f3999ffb92602ee250e1714c9b00de03b6337eb013b7a7c3c59c0.scope: Deactivated successfully.
Jan 27 08:46:46 compute-0 podman[227631]: 2026-01-27 08:46:46.768823256 +0000 UTC m=+0.925633449 container died 346269ddbf4f3999ffb92602ee250e1714c9b00de03b6337eb013b7a7c3c59c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d31f27433cf7789e9f1b39de00c63cf51bb18e38751690126300bd12ffcba93a-merged.mount: Deactivated successfully.
Jan 27 08:46:46 compute-0 podman[227631]: 2026-01-27 08:46:46.830923691 +0000 UTC m=+0.987733864 container remove 346269ddbf4f3999ffb92602ee250e1714c9b00de03b6337eb013b7a7c3c59c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 27 08:46:46 compute-0 systemd[1]: libpod-conmon-346269ddbf4f3999ffb92602ee250e1714c9b00de03b6337eb013b7a7c3c59c0.scope: Deactivated successfully.
Jan 27 08:46:46 compute-0 sudo[227427]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:46 compute-0 sudo[227954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqxytreoeemlimwqfcpasvpbztshugho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503606.6180685-806-60897079076064/AnsiballZ_lineinfile.py'
Jan 27 08:46:46 compute-0 sudo[227954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:46 compute-0 sudo[227955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:46 compute-0 sudo[227955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:46 compute-0 sudo[227955]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:46 compute-0 sudo[227982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:46:46 compute-0 sudo[227982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:46 compute-0 sudo[227982]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:47 compute-0 sudo[228007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:47 compute-0 sudo[228007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:47 compute-0 sudo[228007]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:47 compute-0 python3.9[227961]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:47 compute-0 sudo[228032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:46:47 compute-0 sudo[228032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:47 compute-0 sudo[227954]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:47 compute-0 ceph-mon[74357]: pgmap v679: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:47 compute-0 podman[228180]: 2026-01-27 08:46:47.371234262 +0000 UTC m=+0.040539014 container create 6cb507b07ed5e965f915d2695b65fe4f1f151bfa6bb96fc74c10854b4a36c810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:46:47 compute-0 systemd[1]: Started libpod-conmon-6cb507b07ed5e965f915d2695b65fe4f1f151bfa6bb96fc74c10854b4a36c810.scope.
Jan 27 08:46:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:46:47 compute-0 podman[228180]: 2026-01-27 08:46:47.444027 +0000 UTC m=+0.113331762 container init 6cb507b07ed5e965f915d2695b65fe4f1f151bfa6bb96fc74c10854b4a36c810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:46:47 compute-0 podman[228180]: 2026-01-27 08:46:47.354921895 +0000 UTC m=+0.024226677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:46:47 compute-0 podman[228180]: 2026-01-27 08:46:47.450479707 +0000 UTC m=+0.119784449 container start 6cb507b07ed5e965f915d2695b65fe4f1f151bfa6bb96fc74c10854b4a36c810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:46:47 compute-0 podman[228180]: 2026-01-27 08:46:47.453418598 +0000 UTC m=+0.122723360 container attach 6cb507b07ed5e965f915d2695b65fe4f1f151bfa6bb96fc74c10854b4a36c810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:46:47 compute-0 interesting_mccarthy[228233]: 167 167
Jan 27 08:46:47 compute-0 systemd[1]: libpod-6cb507b07ed5e965f915d2695b65fe4f1f151bfa6bb96fc74c10854b4a36c810.scope: Deactivated successfully.
Jan 27 08:46:47 compute-0 podman[228180]: 2026-01-27 08:46:47.456587065 +0000 UTC m=+0.125891827 container died 6cb507b07ed5e965f915d2695b65fe4f1f151bfa6bb96fc74c10854b4a36c810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mccarthy, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7babb20fce673ba859993bfff5ea83abd9883192c25e62896c2c20a31ce68cfd-merged.mount: Deactivated successfully.
Jan 27 08:46:47 compute-0 sudo[228274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdqdydotzqwwojzlxmeascpcfsatyblq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503607.2127006-806-190571814573166/AnsiballZ_lineinfile.py'
Jan 27 08:46:47 compute-0 sudo[228274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:47 compute-0 podman[228180]: 2026-01-27 08:46:47.492245344 +0000 UTC m=+0.161550106 container remove 6cb507b07ed5e965f915d2695b65fe4f1f151bfa6bb96fc74c10854b4a36c810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mccarthy, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:46:47 compute-0 systemd[1]: libpod-conmon-6cb507b07ed5e965f915d2695b65fe4f1f151bfa6bb96fc74c10854b4a36c810.scope: Deactivated successfully.
Jan 27 08:46:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:46:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:47.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:46:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:46:47 compute-0 podman[228289]: 2026-01-27 08:46:47.648868873 +0000 UTC m=+0.050799096 container create 4186bd90004510eda1ecb5b6e9226a5e0259f6437b444cf5cb5fe7cdda8d6b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cerf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:46:47 compute-0 python3.9[228281]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:47 compute-0 sudo[228274]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:47 compute-0 systemd[1]: Started libpod-conmon-4186bd90004510eda1ecb5b6e9226a5e0259f6437b444cf5cb5fe7cdda8d6b5f.scope.
Jan 27 08:46:47 compute-0 podman[228289]: 2026-01-27 08:46:47.628284028 +0000 UTC m=+0.030214221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:46:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe8b540e86293356dd6759e8f4c0d60a7915293c7916dcf4d1a574aec35f07e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe8b540e86293356dd6759e8f4c0d60a7915293c7916dcf4d1a574aec35f07e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe8b540e86293356dd6759e8f4c0d60a7915293c7916dcf4d1a574aec35f07e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe8b540e86293356dd6759e8f4c0d60a7915293c7916dcf4d1a574aec35f07e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:47 compute-0 podman[228289]: 2026-01-27 08:46:47.742627046 +0000 UTC m=+0.144557249 container init 4186bd90004510eda1ecb5b6e9226a5e0259f6437b444cf5cb5fe7cdda8d6b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:46:47 compute-0 podman[228289]: 2026-01-27 08:46:47.754212204 +0000 UTC m=+0.156142427 container start 4186bd90004510eda1ecb5b6e9226a5e0259f6437b444cf5cb5fe7cdda8d6b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cerf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 27 08:46:47 compute-0 podman[228289]: 2026-01-27 08:46:47.760721693 +0000 UTC m=+0.162651876 container attach 4186bd90004510eda1ecb5b6e9226a5e0259f6437b444cf5cb5fe7cdda8d6b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:46:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:47.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:48 compute-0 sudo[228460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iplxhfcjpvwqmdnhtopnymhgzvgxgvvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503607.8227558-806-237262735738757/AnsiballZ_lineinfile.py'
Jan 27 08:46:48 compute-0 sudo[228460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:48 compute-0 python3.9[228462]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:48 compute-0 sudo[228460]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:48 compute-0 quirky_cerf[228306]: {
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:     "0": [
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:         {
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             "devices": [
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "/dev/loop3"
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             ],
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             "lv_name": "ceph_lv0",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             "lv_size": "7511998464",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             "name": "ceph_lv0",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             "tags": {
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "ceph.cluster_name": "ceph",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "ceph.crush_device_class": "",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "ceph.encrypted": "0",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "ceph.osd_id": "0",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "ceph.type": "block",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:                 "ceph.vdo": "0"
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             },
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             "type": "block",
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:             "vg_name": "ceph_vg0"
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:         }
Jan 27 08:46:48 compute-0 quirky_cerf[228306]:     ]
Jan 27 08:46:48 compute-0 quirky_cerf[228306]: }
Jan 27 08:46:48 compute-0 systemd[1]: libpod-4186bd90004510eda1ecb5b6e9226a5e0259f6437b444cf5cb5fe7cdda8d6b5f.scope: Deactivated successfully.
Jan 27 08:46:48 compute-0 podman[228289]: 2026-01-27 08:46:48.550593684 +0000 UTC m=+0.952523857 container died 4186bd90004510eda1ecb5b6e9226a5e0259f6437b444cf5cb5fe7cdda8d6b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cerf, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 27 08:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-afe8b540e86293356dd6759e8f4c0d60a7915293c7916dcf4d1a574aec35f07e-merged.mount: Deactivated successfully.
Jan 27 08:46:48 compute-0 podman[228289]: 2026-01-27 08:46:48.60909055 +0000 UTC m=+1.011020753 container remove 4186bd90004510eda1ecb5b6e9226a5e0259f6437b444cf5cb5fe7cdda8d6b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:46:48 compute-0 systemd[1]: libpod-conmon-4186bd90004510eda1ecb5b6e9226a5e0259f6437b444cf5cb5fe7cdda8d6b5f.scope: Deactivated successfully.
Jan 27 08:46:48 compute-0 ceph-mon[74357]: pgmap v680: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:48 compute-0 sudo[228032]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:48 compute-0 sudo[228597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:48 compute-0 sudo[228597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:48 compute-0 sudo[228597]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:48 compute-0 sudo[228657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibodpshufasjfljlsndxtgkmljkcchdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503608.4411285-806-64759032134003/AnsiballZ_lineinfile.py'
Jan 27 08:46:48 compute-0 sudo[228657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:48 compute-0 sudo[228646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:46:48 compute-0 sudo[228646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:48 compute-0 sudo[228646]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:48 compute-0 sudo[228679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:48 compute-0 sudo[228679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:48 compute-0 sudo[228679]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:48 compute-0 sudo[228704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:46:48 compute-0 sudo[228704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:48 compute-0 python3.9[228676]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:48 compute-0 sudo[228657]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:49 compute-0 podman[228793]: 2026-01-27 08:46:49.138367638 +0000 UTC m=+0.039785103 container create f82472f416ae28fd7e7328b09f78d4406230e6a3906e8783f2c05f8d73ee2c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 08:46:49 compute-0 systemd[1]: Started libpod-conmon-f82472f416ae28fd7e7328b09f78d4406230e6a3906e8783f2c05f8d73ee2c61.scope.
Jan 27 08:46:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:46:49 compute-0 podman[228793]: 2026-01-27 08:46:49.207816485 +0000 UTC m=+0.109233970 container init f82472f416ae28fd7e7328b09f78d4406230e6a3906e8783f2c05f8d73ee2c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:46:49 compute-0 podman[228793]: 2026-01-27 08:46:49.120163279 +0000 UTC m=+0.021580794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:46:49 compute-0 podman[228793]: 2026-01-27 08:46:49.216539134 +0000 UTC m=+0.117956599 container start f82472f416ae28fd7e7328b09f78d4406230e6a3906e8783f2c05f8d73ee2c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shtern, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:46:49 compute-0 podman[228793]: 2026-01-27 08:46:49.219462324 +0000 UTC m=+0.120879809 container attach f82472f416ae28fd7e7328b09f78d4406230e6a3906e8783f2c05f8d73ee2c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shtern, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:46:49 compute-0 agitated_shtern[228809]: 167 167
Jan 27 08:46:49 compute-0 systemd[1]: libpod-f82472f416ae28fd7e7328b09f78d4406230e6a3906e8783f2c05f8d73ee2c61.scope: Deactivated successfully.
Jan 27 08:46:49 compute-0 podman[228793]: 2026-01-27 08:46:49.222579639 +0000 UTC m=+0.123997104 container died f82472f416ae28fd7e7328b09f78d4406230e6a3906e8783f2c05f8d73ee2c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:46:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e78f9aa64ddb5c99084b09e77588e51e475ecc0649fe01606b7a61ee901136be-merged.mount: Deactivated successfully.
Jan 27 08:46:49 compute-0 podman[228793]: 2026-01-27 08:46:49.265016194 +0000 UTC m=+0.166433659 container remove f82472f416ae28fd7e7328b09f78d4406230e6a3906e8783f2c05f8d73ee2c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shtern, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:46:49 compute-0 systemd[1]: libpod-conmon-f82472f416ae28fd7e7328b09f78d4406230e6a3906e8783f2c05f8d73ee2c61.scope: Deactivated successfully.
Jan 27 08:46:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:49 compute-0 podman[228853]: 2026-01-27 08:46:49.405999165 +0000 UTC m=+0.037839450 container create ba039362907d1d61bdca6c0c09684c206704ae1e99f1faaaa046b0ae9d162afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_leakey, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:46:49 compute-0 systemd[1]: Started libpod-conmon-ba039362907d1d61bdca6c0c09684c206704ae1e99f1faaaa046b0ae9d162afa.scope.
Jan 27 08:46:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a9ccfce0f2d7940e84b54f715eadd008e6f0a3c46209d2dea3f3d6ffb3554d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a9ccfce0f2d7940e84b54f715eadd008e6f0a3c46209d2dea3f3d6ffb3554d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a9ccfce0f2d7940e84b54f715eadd008e6f0a3c46209d2dea3f3d6ffb3554d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a9ccfce0f2d7940e84b54f715eadd008e6f0a3c46209d2dea3f3d6ffb3554d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:46:49 compute-0 podman[228853]: 2026-01-27 08:46:49.390270053 +0000 UTC m=+0.022110348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:46:49 compute-0 podman[228853]: 2026-01-27 08:46:49.49254507 +0000 UTC m=+0.124385375 container init ba039362907d1d61bdca6c0c09684c206704ae1e99f1faaaa046b0ae9d162afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_leakey, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:46:49 compute-0 podman[228853]: 2026-01-27 08:46:49.499711407 +0000 UTC m=+0.131551682 container start ba039362907d1d61bdca6c0c09684c206704ae1e99f1faaaa046b0ae9d162afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_leakey, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 27 08:46:49 compute-0 podman[228853]: 2026-01-27 08:46:49.504055596 +0000 UTC m=+0.135895871 container attach ba039362907d1d61bdca6c0c09684c206704ae1e99f1faaaa046b0ae9d162afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 08:46:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:49.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:49 compute-0 sudo[228979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfqkzzwsspeykcdfnirctbbmhodfpnbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503609.3715386-893-199649424048505/AnsiballZ_stat.py'
Jan 27 08:46:49 compute-0 sudo[228979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:46:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:49.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:46:49 compute-0 python3.9[228981]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:46:49 compute-0 sudo[228979]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:50 compute-0 vigilant_leakey[228901]: {
Jan 27 08:46:50 compute-0 vigilant_leakey[228901]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:46:50 compute-0 vigilant_leakey[228901]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:46:50 compute-0 vigilant_leakey[228901]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:46:50 compute-0 vigilant_leakey[228901]:         "osd_id": 0,
Jan 27 08:46:50 compute-0 vigilant_leakey[228901]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:46:50 compute-0 vigilant_leakey[228901]:         "type": "bluestore"
Jan 27 08:46:50 compute-0 vigilant_leakey[228901]:     }
Jan 27 08:46:50 compute-0 vigilant_leakey[228901]: }
Jan 27 08:46:50 compute-0 podman[229058]: 2026-01-27 08:46:50.311400337 +0000 UTC m=+0.126928055 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:46:50 compute-0 systemd[1]: libpod-ba039362907d1d61bdca6c0c09684c206704ae1e99f1faaaa046b0ae9d162afa.scope: Deactivated successfully.
Jan 27 08:46:50 compute-0 podman[228853]: 2026-01-27 08:46:50.314863972 +0000 UTC m=+0.946704267 container died ba039362907d1d61bdca6c0c09684c206704ae1e99f1faaaa046b0ae9d162afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-56a9ccfce0f2d7940e84b54f715eadd008e6f0a3c46209d2dea3f3d6ffb3554d-merged.mount: Deactivated successfully.
Jan 27 08:46:50 compute-0 podman[228853]: 2026-01-27 08:46:50.385193722 +0000 UTC m=+1.017033997 container remove ba039362907d1d61bdca6c0c09684c206704ae1e99f1faaaa046b0ae9d162afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_leakey, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:46:50 compute-0 systemd[1]: libpod-conmon-ba039362907d1d61bdca6c0c09684c206704ae1e99f1faaaa046b0ae9d162afa.scope: Deactivated successfully.
Jan 27 08:46:50 compute-0 sudo[229185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phdtqsxkwltfbydlxuxwqmpvhtrmqqgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503610.1123033-917-99623335108714/AnsiballZ_command.py'
Jan 27 08:46:50 compute-0 sudo[228704]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:50 compute-0 sudo[229185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:46:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:46:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:46:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:46:50 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b39653a5-39cc-47c4-a698-ca89868966e2 does not exist
Jan 27 08:46:50 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 4ec5ec81-9a1f-460a-83e0-e9ed7ce75cd9 does not exist
Jan 27 08:46:50 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b18130f8-3aa4-4c81-80e9-79ecc97cc3e6 does not exist
Jan 27 08:46:50 compute-0 sudo[229188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:46:50 compute-0 sudo[229188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:50 compute-0 sudo[229188]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:50 compute-0 sudo[229213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:46:50 compute-0 sudo[229213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:46:50 compute-0 sudo[229213]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:50 compute-0 python3.9[229187]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:46:50 compute-0 sudo[229185]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:50 compute-0 ceph-mon[74357]: pgmap v681: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:50 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:46:50 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:46:51 compute-0 sudo[229389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqjehfacgkmbubjdmxzkhbigqnbkitjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503610.9524221-944-16300515991028/AnsiballZ_systemd_service.py'
Jan 27 08:46:51 compute-0 sudo[229389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:46:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:51.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:46:51 compute-0 python3.9[229391]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:46:51 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 27 08:46:51 compute-0 sudo[229389]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:46:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:51.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:46:52 compute-0 sudo[229545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxjkesruqfoqvvsnkerrjcbtqhhoppmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503611.8742797-968-112438070620523/AnsiballZ_systemd_service.py'
Jan 27 08:46:52 compute-0 sudo[229545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:52 compute-0 python3.9[229547]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:46:52 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 27 08:46:52 compute-0 udevadm[229552]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 27 08:46:52 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 27 08:46:52 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 27 08:46:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:46:52 compute-0 multipathd[229556]: --------start up--------
Jan 27 08:46:52 compute-0 multipathd[229556]: read /etc/multipath.conf
Jan 27 08:46:52 compute-0 multipathd[229556]: path checkers start up
Jan 27 08:46:52 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 27 08:46:52 compute-0 sudo[229545]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:52 compute-0 ceph-mon[74357]: pgmap v682: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:46:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:53.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:46:53 compute-0 sudo[229714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsdwdshifrzsouudbgurxkxavxwzjzgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503613.254569-1004-280011737246031/AnsiballZ_file.py'
Jan 27 08:46:53 compute-0 sudo[229714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:53 compute-0 python3.9[229716]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 27 08:46:53 compute-0 sudo[229714]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:46:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:53.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:46:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:46:54.229 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:46:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:46:54.230 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:46:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:46:54.230 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:46:54 compute-0 sudo[229866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgdavsnghovfwficfsrpcdzgrpvczldq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503613.97634-1028-235135219352683/AnsiballZ_modprobe.py'
Jan 27 08:46:54 compute-0 sudo[229866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:54 compute-0 python3.9[229868]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 27 08:46:54 compute-0 kernel: Key type psk registered
Jan 27 08:46:54 compute-0 sudo[229866]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:54 compute-0 ceph-mon[74357]: pgmap v683: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:55 compute-0 sudo[230029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdqlblrefvslsapxxpzlxleapbvezvnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503614.8556342-1052-110863904943448/AnsiballZ_stat.py'
Jan 27 08:46:55 compute-0 sudo[230029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:55 compute-0 python3.9[230031]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:46:55 compute-0 sudo[230029]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:55.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:55 compute-0 sudo[230152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idyjcmhmqgbygcsoamqrdrzsduyqdodt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503614.8556342-1052-110863904943448/AnsiballZ_copy.py'
Jan 27 08:46:55 compute-0 sudo[230152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:55.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:55 compute-0 python3.9[230154]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769503614.8556342-1052-110863904943448/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:55 compute-0 sudo[230152]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:56 compute-0 sudo[230304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udygpthrvrgvjxyqanfkqjlxohfwjntl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503616.2135859-1100-250053420808253/AnsiballZ_lineinfile.py'
Jan 27 08:46:56 compute-0 sudo[230304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:56 compute-0 python3.9[230306]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:46:56 compute-0 sudo[230304]: pam_unix(sudo:session): session closed for user root
Jan 27 08:46:56 compute-0 ceph-mon[74357]: pgmap v684: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:57 compute-0 sudo[230457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqanyhxkkszrobevqnqrnbbwlyzczjjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503616.9520545-1124-104777497876197/AnsiballZ_systemd.py'
Jan 27 08:46:57 compute-0 sudo[230457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:57 compute-0 python3.9[230459]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:46:57 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 27 08:46:57 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 27 08:46:57 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 27 08:46:57 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 27 08:46:57 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 27 08:46:57 compute-0 sudo[230457]: pam_unix(sudo:session): session closed for user root
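The three Ansible tasks above (the copy to /etc/modules-load.d/nvme-fabrics.conf, the lineinfile edit of /etc/modules, and the systemd-modules-load restart) together ensure the nvme-fabrics kernel module is loaded immediately and on every boot. A minimal Python sketch of the same sequence, run as root; writing just the module name into the conf file is an assumption based on the module name logged above:

    #!/usr/bin/python3.9
    # Sketch of the module-load configuration applied by the Ansible tasks above.
    # Assumption: the managed conf file contains only the module name.
    import pathlib
    import subprocess

    MODULE = "nvme-fabrics"

    conf = pathlib.Path("/etc/modules-load.d/nvme-fabrics.conf")
    conf.write_text(MODULE + "\n")           # ansible.legacy.copy, mode=0644
    conf.chmod(0o644)

    modules = pathlib.Path("/etc/modules")
    lines = modules.read_text().splitlines() if modules.exists() else []
    if MODULE not in lines:                   # ansible.builtin.lineinfile, state=present
        lines.append(MODULE)
        modules.write_text("\n".join(lines) + "\n")
    modules.chmod(0o644)

    # ansible.builtin.systemd state=restarted: loads the module right away.
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"], check=True)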
Jan 27 08:46:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:46:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:46:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:57.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:46:58 compute-0 sudo[230613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-serntzhuesokocomjolfydstguzmeveg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503618.0694523-1148-246203889432403/AnsiballZ_dnf.py'
Jan 27 08:46:58 compute-0 sudo[230613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:46:58 compute-0 python3.9[230615]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:46:59 compute-0 ceph-mon[74357]: pgmap v685: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:59 compute-0 podman[230618]: 2026-01-27 08:46:59.233833207 +0000 UTC m=+0.048681426 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:46:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:46:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:46:59.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:46:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:46:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:46:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:46:59.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:00 compute-0 sudo[230639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:00 compute-0 sudo[230639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:00 compute-0 sudo[230639]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:00 compute-0 sudo[230664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:00 compute-0 sudo[230664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:00 compute-0 sudo[230664]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:01.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:01 compute-0 ceph-mon[74357]: pgmap v686: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:01 compute-0 systemd[1]: Reloading.
Jan 27 08:47:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:01.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:01 compute-0 systemd-rc-local-generator[230715]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:47:01 compute-0 systemd-sysv-generator[230719]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:47:02 compute-0 systemd[1]: Reloading.
Jan 27 08:47:02 compute-0 systemd-rc-local-generator[230755]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:47:02 compute-0 systemd-sysv-generator[230758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:47:02 compute-0 systemd-logind[799]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 27 08:47:02 compute-0 systemd-logind[799]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 27 08:47:02 compute-0 lvm[230799]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:47:02 compute-0 lvm[230799]: VG ceph_vg0 finished
Jan 27 08:47:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:47:02 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:47:02 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:47:02 compute-0 systemd[1]: Reloading.
Jan 27 08:47:02 compute-0 systemd-rc-local-generator[230851]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:47:02 compute-0 systemd-sysv-generator[230857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:47:03 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:47:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:03.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:03.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:04 compute-0 ceph-mon[74357]: pgmap v687: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:05.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:05 compute-0 sudo[230613]: pam_unix(sudo:session): session closed for user root
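This session close ends the dnf task started at 08:46:58 (the man-db-cache-update run in between is a packaging side effect of the install). A rough, idempotent Python equivalent of the ansible.legacy.dnf task (name=['nvme-cli'], state=present):

    # Install nvme-cli only if it is not already present, mirroring state=present.
    import subprocess

    def ensure_installed(pkg: str) -> None:
        if subprocess.run(["rpm", "-q", pkg],
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL).returncode != 0:
            subprocess.run(["dnf", "-y", "install", pkg], check=True)

    ensure_installed("nvme-cli")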
Jan 27 08:47:05 compute-0 ceph-mon[74357]: pgmap v688: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:05.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:06 compute-0 sudo[232155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kofzgtfeacozuwgdzulejqkutkyfgkqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503625.81719-1172-247936519120711/AnsiballZ_systemd_service.py'
Jan 27 08:47:06 compute-0 sudo[232155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:06 compute-0 python3.9[232157]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:47:06 compute-0 iscsid[224676]: iscsid shutting down.
Jan 27 08:47:06 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 27 08:47:06 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 27 08:47:06 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 27 08:47:06 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 27 08:47:06 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 27 08:47:06 compute-0 systemd[1]: Started Open-iSCSI.
Jan 27 08:47:06 compute-0 sudo[232155]: pam_unix(sudo:session): session closed for user root
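The "One time configuration for iscsi.service was skipped" line reflects a negated systemd condition, ConditionPathExists=!/etc/iscsi/initiatorname.iscsi: the one-time setup runs only when the initiator name file is absent. A sketch of that gate; using iscsi-iname (from iscsi-initiator-utils) to generate the name is an assumption about what the one-time unit does, not taken from this log:

    import pathlib
    import subprocess

    initiator = pathlib.Path("/etc/iscsi/initiatorname.iscsi")
    if not initiator.exists():  # condition is inverted: run only when the file is missing
        name = subprocess.run(["iscsi-iname"], capture_output=True,
                              text=True, check=True).stdout.strip()
        initiator.write_text(f"InitiatorName={name}\n")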
Jan 27 08:47:06 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:47:06 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:47:06 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.356s CPU time.
Jan 27 08:47:06 compute-0 systemd[1]: run-rc5ee43edb71a483f86f7c1257daea69a.service: Deactivated successfully.
Jan 27 08:47:07 compute-0 sudo[232313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxibjzlwjktnmqmdawcshynfrbrsylth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503626.8906622-1196-98881066633605/AnsiballZ_systemd_service.py'
Jan 27 08:47:07 compute-0 sudo[232313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:07 compute-0 python3.9[232315]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:47:07 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 27 08:47:07 compute-0 multipathd[229556]: exit (signal)
Jan 27 08:47:07 compute-0 multipathd[229556]: --------shut down-------
Jan 27 08:47:07 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 27 08:47:07 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 27 08:47:07 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 27 08:47:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:07.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:07 compute-0 multipathd[232321]: --------start up--------
Jan 27 08:47:07 compute-0 multipathd[232321]: read /etc/multipath.conf
Jan 27 08:47:07 compute-0 multipathd[232321]: path checkers start up
Jan 27 08:47:07 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 27 08:47:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:47:07 compute-0 sudo[232313]: pam_unix(sudo:session): session closed for user root
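On start-up multipathd logged that it read /etc/multipath.conf before starting its path checkers. A sketch of seeding that file and restarting the daemon; the two options shown are common illustrative defaults, not the configuration actually present on compute-0:

    import pathlib
    import subprocess

    # Assumption: a minimal example config, not this host's real file.
    pathlib.Path("/etc/multipath.conf").write_text(
        "defaults {\n"
        "    user_friendly_names yes\n"
        "    find_multipaths yes\n"
        "}\n"
    )
    # multipathd re-reads the file on restart, then starts its path checkers.
    subprocess.run(["systemctl", "restart", "multipathd.service"], check=True)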
Jan 27 08:47:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:07.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:09.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:09.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:11 compute-0 ceph-mon[74357]: pgmap v689: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:11.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:11.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:12 compute-0 ceph-mon[74357]: pgmap v690: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:12 compute-0 ceph-mon[74357]: pgmap v691: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:12 compute-0 ceph-mon[74357]: pgmap v692: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:47:12 compute-0 python3.9[232480]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:47:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:13.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:13 compute-0 sudo[232635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krmvsmfblkmwtjixyvtwvrwgjmywjjfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503633.4311247-1248-263794522970953/AnsiballZ_file.py'
Jan 27 08:47:13 compute-0 sudo[232635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:13.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:13 compute-0 python3.9[232637]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:13 compute-0 sudo[232635]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:14 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 27 08:47:14 compute-0 ceph-mon[74357]: pgmap v693: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:14 compute-0 sudo[232788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlylcmbnvrzftwtbnrpzblhxvxpubdjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503634.4653602-1281-163820006767842/AnsiballZ_systemd_service.py'
Jan 27 08:47:14 compute-0 sudo[232788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:47:14
Jan 27 08:47:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:47:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:47:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'vms', 'backups', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'default.rgw.control']
Jan 27 08:47:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
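The balancer lines above show mode=upmap with a 5% misplaced ceiling and "prepared 0/10 changes": at most 10 upmap adjustments per round, and none are queued if they would push the misplaced fraction past the limit. Here the cluster is already balanced, so the plan is empty. A sketch of that throttle; the function and its names are illustrative, not mgr module code:

    # Bound a balancing round by change count and by the misplaced-PG ceiling.
    def plan_changes(candidates, pg_total, misplaced_now,
                     max_changes=10, max_misplaced=0.05):
        plan = []
        for change in candidates[:max_changes]:
            # approximate each upmap change as displacing one PG
            if misplaced_now + (len(plan) + 1) / pg_total > max_misplaced:
                break
            plan.append(change)
        return plan

    # 305 PGs, nothing to move: matches "prepared 0/10 changes" above.
    print(plan_changes([], pg_total=305, misplaced_now=0.0))  # -> []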
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:47:15 compute-0 python3.9[232790]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:47:15 compute-0 systemd[1]: Reloading.
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:47:15 compute-0 systemd-rc-local-generator[232813]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:47:15 compute-0 systemd-sysv-generator[232818]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:47:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:15 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 08:47:15 compute-0 sudo[232788]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:15.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:15.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:16 compute-0 python3.9[232977]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:47:16 compute-0 network[232994]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:47:16 compute-0 network[232995]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:47:16 compute-0 network[232996]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:47:16 compute-0 ceph-mon[74357]: pgmap v694: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:17.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:47:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:17.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:18 compute-0 ceph-mon[74357]: pgmap v695: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:19.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:19.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:20 compute-0 sudo[233104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:20 compute-0 sudo[233104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:20 compute-0 sudo[233104]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:20 compute-0 sudo[233129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:20 compute-0 sudo[233129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:20 compute-0 sudo[233129]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:20 compute-0 podman[233153]: 2026-01-27 08:47:20.433030984 +0000 UTC m=+0.074960696 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true)
Jan 27 08:47:20 compute-0 ceph-mon[74357]: pgmap v696: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:21.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:21.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:22 compute-0 sudo[233347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njucgogebklxrtmwehzoxmtwihicrblk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503641.7849684-1338-117795045359383/AnsiballZ_systemd_service.py'
Jan 27 08:47:22 compute-0 sudo[233347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:22 compute-0 python3.9[233349]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:47:22 compute-0 sudo[233347]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:47:22 compute-0 ceph-mon[74357]: pgmap v697: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:22 compute-0 sudo[233500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfvxjlzebvibgnyqtaauywtioirncvlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503642.482853-1338-125102004204701/AnsiballZ_systemd_service.py'
Jan 27 08:47:22 compute-0 sudo[233500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:23 compute-0 python3.9[233502]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:47:23 compute-0 sudo[233500]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:23 compute-0 sudo[233654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elbamlmwumljvmfbbflraimouszdrwfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503643.244153-1338-71784315866148/AnsiballZ_systemd_service.py'
Jan 27 08:47:23 compute-0 sudo[233654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:23.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:23 compute-0 python3.9[233656]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:47:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:23.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:23 compute-0 sudo[233654]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:47:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
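The autoscaler arithmetic is visible in the lines above: raw pg target = (fraction of space used) x bias x (target PGs cluster-wide), then quantized to a power of two subject to a per-pool minimum. The logged numbers imply a cluster-wide multiplier of 300 (e.g. 1.454e-06 x 4.0 x 300 ~= 0.00174 for 'cephfs.cephfs.meta'), consistent with mon_target_pg_per_osd=100 on 3 OSDs; that multiplier and the per-pool pg_num_min values below are inferences from this log, not values read from the cluster:

    import math

    # Reconstructs the "pg target ... quantized to ..." arithmetic above.
    def quantized_pg_target(usage_fraction: float, bias: float, pg_num_min: int,
                            target_pg_per_osd: int = 100, num_osds: int = 3) -> int:
        raw = usage_fraction * bias * target_pg_per_osd * num_osds
        # round up to a power of two, then respect the pool's minimum
        pow2 = 1 if raw <= 1 else 2 ** math.ceil(math.log2(raw))
        return max(pg_num_min, pow2)

    # 'cephfs.cephfs.meta': raw ~0.00174 -> stays at its minimum of 16
    print(quantized_pg_target(1.4540294062907128e-06, 4.0, pg_num_min=16))  # 16
    # '.mgr': raw ~0.0062 -> stays at 1
    print(quantized_pg_target(2.0538165363856318e-05, 1.0, pg_num_min=1))   # 1
    # empty data pools fall back to the assumed default minimum of 32
    print(quantized_pg_target(0.0, 1.0, pg_num_min=32))                     # 32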
Jan 27 08:47:24 compute-0 sudo[233807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clqkifuzuylsugsmsvysdanvdizgejny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503643.9843168-1338-46574652274040/AnsiballZ_systemd_service.py'
Jan 27 08:47:24 compute-0 sudo[233807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:24 compute-0 python3.9[233809]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:47:24 compute-0 sudo[233807]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:24 compute-0 ceph-mon[74357]: pgmap v698: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:24 compute-0 sudo[233960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oohoyyiublytqmdpkwoxzjslszqiidvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503644.6903045-1338-215207844304952/AnsiballZ_systemd_service.py'
Jan 27 08:47:25 compute-0 sudo[233960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:25 compute-0 python3.9[233962]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:47:25 compute-0 sudo[233960]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:25.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:25 compute-0 sudo[234114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psterzsbyoaouritdfgfjobrrtdldvtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503645.446924-1338-57916611366261/AnsiballZ_systemd_service.py'
Jan 27 08:47:25 compute-0 sudo[234114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:25.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:26 compute-0 python3.9[234116]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:47:26 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 27 08:47:26 compute-0 sudo[234114]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:26 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 27 08:47:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:47:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3783 writes, 16K keys, 3783 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3783 writes, 3783 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1402 writes, 5711 keys, 1402 commit groups, 1.0 writes per commit group, ingest: 9.84 MB, 0.02 MB/s
                                           Interval WAL: 1402 writes, 1402 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    121.4      0.16              0.05         7    0.022       0      0       0.0       0.0
                                             L6      1/0    7.63 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.7    147.0    120.3      0.42              0.13         6    0.070     26K   3291       0.0       0.0
                                            Sum      1/0    7.63 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7    107.4    120.6      0.58              0.17        13    0.044     26K   3291       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    172.7    173.9      0.19              0.09         6    0.032     14K   1988       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    147.0    120.3      0.42              0.13         6    0.070     26K   3291       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    125.5      0.15              0.05         6    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.4      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.018, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 0.6 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f59eb431f0#2 capacity: 308.00 MB usage: 2.09 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(107,1.85 MB,0.599353%) FilterBlock(14,82.42 KB,0.0261332%) IndexBlock(14,168.17 KB,0.0533215%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
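The mon's RocksDB dump above is internally consistent: 0.02 GB ingested over the 1200 s uptime is ~0.017 MB/s, which RocksDB rounds to the 0.02 MB/s it printed. A small sketch that checks this from the two quoted lines; the regexes target only these lines, not the full stats format:

    import re

    dump = """Uptime(secs): 1200.0 total, 600.0 interval
    Cumulative writes: 3783 writes, 16K keys, 3783 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s"""

    uptime = float(re.search(r"Uptime\(secs\): ([\d.]+) total", dump).group(1))
    ingest_gb = float(re.search(r"ingest: ([\d.]+) GB", dump).group(1))
    print(f"{ingest_gb * 1024 / uptime:.3f} MB/s")  # ~0.017 MB/s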
Jan 27 08:47:26 compute-0 sudo[234269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kubxcpmqrqymvsdzxbhmvpjegceyfgzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503646.2776423-1338-87443691430799/AnsiballZ_systemd_service.py'
Jan 27 08:47:26 compute-0 sudo[234269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:26 compute-0 ceph-mon[74357]: pgmap v699: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:26 compute-0 python3.9[234271]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:47:26 compute-0 sudo[234269]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:27 compute-0 sudo[234423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srjrfvwonhltnhcvarjuzobqnmyiusax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503647.0677931-1338-226517869378348/AnsiballZ_systemd_service.py'
Jan 27 08:47:27 compute-0 sudo[234423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:27.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:47:27 compute-0 python3.9[234425]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:47:27 compute-0 sudo[234423]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:27.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:28 compute-0 sudo[234576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckhvqkjuaoamtqwcpijibtcptupxyvpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503648.4048707-1515-247000211920624/AnsiballZ_file.py'
Jan 27 08:47:28 compute-0 sudo[234576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:28 compute-0 ceph-mon[74357]: pgmap v700: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:28 compute-0 python3.9[234578]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:28 compute-0 sudo[234576]: pam_unix(sudo:session): session closed for user root
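From 08:47:22 onward the run decommissions the legacy TripleO nova services: each tripleo_nova_* unit is stopped and disabled via systemd_service, then its unit file under /usr/lib/systemd/system is deleted with the file module (the removals continue below). A sketch of one unit's teardown; issuing a daemon-reload after removing unit files is standard practice, though this excerpt does not show where the run performs it:

    import pathlib
    import subprocess

    def remove_unit(name: str) -> None:
        # enabled=False + state=stopped, tolerating already-absent units
        subprocess.run(["systemctl", "disable", "--now", name], check=False)
        # ansible.builtin.file state=absent on the unit file
        pathlib.Path("/usr/lib/systemd/system", name).unlink(missing_ok=True)

    for unit in ["tripleo_nova_compute.service",
                 "tripleo_nova_migration_target.service",
                 "tripleo_nova_api_cron.service"]:
        remove_unit(unit)
    subprocess.run(["systemctl", "daemon-reload"], check=True)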
Jan 27 08:47:29 compute-0 sudo[234729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmtmdfifumnfafoeloddycbrbfqjnbhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503649.0034697-1515-68849863305807/AnsiballZ_file.py'
Jan 27 08:47:29 compute-0 sudo[234729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:29 compute-0 podman[234731]: 2026-01-27 08:47:29.425870995 +0000 UTC m=+0.082772532 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 08:47:29 compute-0 python3.9[234732]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:29 compute-0 sudo[234729]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:29.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:29.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:29 compute-0 sudo[234899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcuulvldybngboqwfmynjqvirapfkvyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503649.6897254-1515-42776565898273/AnsiballZ_file.py'
Jan 27 08:47:29 compute-0 sudo[234899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:30 compute-0 python3.9[234901]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:30 compute-0 sudo[234899]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:30 compute-0 sudo[235051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkfcxydsobercvyvxfdaqjfhdhgwcxza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503650.3529062-1515-134619044884570/AnsiballZ_file.py'
Jan 27 08:47:30 compute-0 sudo[235051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:30 compute-0 python3.9[235053]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:30 compute-0 ceph-mon[74357]: pgmap v701: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:30 compute-0 sudo[235051]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:31 compute-0 sudo[235204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsjglexuozpqpgnglhkvdoophubskyha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503651.006912-1515-127943892034853/AnsiballZ_file.py'
Jan 27 08:47:31 compute-0 sudo[235204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:31 compute-0 python3.9[235206]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:31 compute-0 sudo[235204]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:31.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:31.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:32 compute-0 sudo[235356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejcymwqylvnwhkwicshkffzbqtnbltsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503651.7190201-1515-33157596324049/AnsiballZ_file.py'
Jan 27 08:47:32 compute-0 sudo[235356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:32 compute-0 python3.9[235358]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:32 compute-0 sudo[235356]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:47:32 compute-0 sudo[235508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztpuhnzogvfkcmxicwnypqrtumshaadz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503652.478814-1515-205725365523622/AnsiballZ_file.py'
Jan 27 08:47:32 compute-0 sudo[235508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:32 compute-0 ceph-mon[74357]: pgmap v702: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:33 compute-0 python3.9[235510]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:33 compute-0 sudo[235508]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:33 compute-0 sudo[235661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gastueidddvoscfnekiuhvzhphesstqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503653.1631768-1515-64903061634674/AnsiballZ_file.py'
Jan 27 08:47:33 compute-0 sudo[235661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:33.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:33 compute-0 python3.9[235663]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:33 compute-0 sudo[235661]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:33.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:34 compute-0 sudo[235813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdjulcvuxmwumsgffjlqqgbmaimfueth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503654.093-1686-270188335519366/AnsiballZ_file.py'
Jan 27 08:47:34 compute-0 sudo[235813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:34 compute-0 python3.9[235815]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:34 compute-0 sudo[235813]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:34 compute-0 ceph-mon[74357]: pgmap v703: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:35 compute-0 sudo[235966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vundtlixwahtqdahpaimkamxbmovmiow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503654.8315423-1686-77449386053814/AnsiballZ_file.py'
Jan 27 08:47:35 compute-0 sudo[235966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:35 compute-0 python3.9[235968]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:35 compute-0 sudo[235966]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:35.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:35 compute-0 sudo[236118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udsfuzojuximepocsusdizhjpbiakxnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503655.5152955-1686-233508475459665/AnsiballZ_file.py'
Jan 27 08:47:35 compute-0 sudo[236118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:35.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:36 compute-0 python3.9[236120]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:36 compute-0 sudo[236118]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:36 compute-0 sudo[236270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkrmulvhqnftrzweiczvdbexhogsdgic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503656.236127-1686-15792691023497/AnsiballZ_file.py'
Jan 27 08:47:36 compute-0 sudo[236270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:36 compute-0 python3.9[236272]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:36 compute-0 sudo[236270]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:36 compute-0 ceph-mon[74357]: pgmap v704: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:37 compute-0 sudo[236423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqxnwvqybkmrsrewgntjlviyzbtjgqcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503656.9106228-1686-272568639206458/AnsiballZ_file.py'
Jan 27 08:47:37 compute-0 sudo[236423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:37 compute-0 python3.9[236425]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:37 compute-0 sudo[236423]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:37.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:47:37 compute-0 sudo[236575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soqgczrwobrlvglohseylmgjtfghlrlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503657.5012045-1686-271028735951392/AnsiballZ_file.py'
Jan 27 08:47:37 compute-0 sudo[236575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:37.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:38 compute-0 python3.9[236577]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:38 compute-0 sudo[236575]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:38 compute-0 sudo[236727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnwrkrkrwvbgxphvqyhhmylasntjdmtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503658.2039094-1686-85181682185603/AnsiballZ_file.py'
Jan 27 08:47:38 compute-0 sudo[236727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:38 compute-0 python3.9[236729]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:38 compute-0 sudo[236727]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:39.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:39 compute-0 ceph-mon[74357]: pgmap v705: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:39.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:40 compute-0 sudo[236880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tniueukskorhcujuvwcmfsqxbwqekrhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503658.971081-1686-146437261798551/AnsiballZ_file.py'
Jan 27 08:47:40 compute-0 sudo[236880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:40 compute-0 python3.9[236882]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:47:40 compute-0 sudo[236880]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:40 compute-0 sudo[236907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:40 compute-0 sudo[236907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:40 compute-0 sudo[236907]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:40 compute-0 sudo[236932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:40 compute-0 sudo[236932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:40 compute-0 sudo[236932]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:40 compute-0 ceph-mon[74357]: pgmap v706: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:40 compute-0 sudo[237082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfmnissbythywyxtzimkwpnttzyosovk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503660.598007-1860-167622088476828/AnsiballZ_command.py'
Jan 27 08:47:40 compute-0 sudo[237082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:41 compute-0 python3.9[237084]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:47:41 compute-0 sudo[237082]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:41.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:41.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:42 compute-0 python3.9[237237]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 08:47:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:47:42 compute-0 ceph-mon[74357]: pgmap v707: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:43 compute-0 sudo[237387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlzcqhpwecynkaiyaqgowtkqswbspauq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503662.6684704-1914-124937889067560/AnsiballZ_systemd_service.py'
Jan 27 08:47:43 compute-0 sudo[237387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:43 compute-0 python3.9[237389]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:47:43 compute-0 systemd[1]: Reloading.
Jan 27 08:47:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:43 compute-0 systemd-rc-local-generator[237417]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:47:43 compute-0 systemd-sysv-generator[237421]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:47:43 compute-0 sudo[237387]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:43.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:43.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:44 compute-0 sudo[237575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guheiceygihsucbabpazxplcfodmrnzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503663.961671-1938-244529043215014/AnsiballZ_command.py'
Jan 27 08:47:44 compute-0 sudo[237575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:44 compute-0 python3.9[237577]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:47:44 compute-0 sudo[237575]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:44 compute-0 ceph-mon[74357]: pgmap v708: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:47:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:47:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:47:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:47:45 compute-0 sudo[237728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipmcudinbtyqrcwqfzgjchtpiinxzbok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503664.6222098-1938-257187964755/AnsiballZ_command.py'
Jan 27 08:47:45 compute-0 sudo[237728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:47:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:47:45 compute-0 python3.9[237731]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:47:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:45.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:45.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:46 compute-0 sudo[237728]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:46 compute-0 sudo[237882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujbqrtpvlixzkatkrcoudtgyixespkxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503666.4787414-1938-215289198726136/AnsiballZ_command.py'
Jan 27 08:47:46 compute-0 sudo[237882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:46 compute-0 ceph-mon[74357]: pgmap v709: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:46 compute-0 python3.9[237884]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:47:46 compute-0 sudo[237882]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:47 compute-0 sudo[238036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egocdzvzoogwirgfqptvycflseurfbea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503667.099669-1938-266150050231866/AnsiballZ_command.py'
Jan 27 08:47:47 compute-0 sudo[238036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:47 compute-0 python3.9[238038]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:47:47 compute-0 sudo[238036]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.657633) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503667657673, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1189, "num_deletes": 256, "total_data_size": 2098235, "memory_usage": 2133280, "flush_reason": "Manual Compaction"}
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503667671642, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2076171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15737, "largest_seqno": 16925, "table_properties": {"data_size": 2070466, "index_size": 3100, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11167, "raw_average_key_size": 18, "raw_value_size": 2059182, "raw_average_value_size": 3431, "num_data_blocks": 140, "num_entries": 600, "num_filter_entries": 600, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769503543, "oldest_key_time": 1769503543, "file_creation_time": 1769503667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 14055 microseconds, and 6570 cpu microseconds.
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.671688) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2076171 bytes OK
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.671706) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.673069) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.673088) EVENT_LOG_v1 {"time_micros": 1769503667673082, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.673105) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2093028, prev total WAL file size 2093028, number of live WAL files 2.
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.673940) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2027KB)], [35(7814KB)]
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503667673968, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 10077822, "oldest_snapshot_seqno": -1}
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4267 keys, 9702670 bytes, temperature: kUnknown
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503667733075, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 9702670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9671418, "index_size": 19494, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 106269, "raw_average_key_size": 24, "raw_value_size": 9591336, "raw_average_value_size": 2247, "num_data_blocks": 815, "num_entries": 4267, "num_filter_entries": 4267, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769503667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.733295) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9702670 bytes
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.734914) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.3 rd, 164.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(9.5) write-amplify(4.7) OK, records in: 4794, records dropped: 527 output_compression: NoCompression
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.734941) EVENT_LOG_v1 {"time_micros": 1769503667734929, "job": 16, "event": "compaction_finished", "compaction_time_micros": 59171, "compaction_time_cpu_micros": 22459, "output_level": 6, "num_output_files": 1, "total_output_size": 9702670, "num_input_records": 4794, "num_output_records": 4267, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503667735447, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503667736811, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.673848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.736869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.736874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.736876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.736878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:47:47 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:47.736880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:47:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:47.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:47.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:48 compute-0 sudo[238189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffpfjrnuqnkrmuqgdfmivqppjynbdtwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503667.750823-1938-113723809506904/AnsiballZ_command.py'
Jan 27 08:47:48 compute-0 sudo[238189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:48 compute-0 python3.9[238191]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:47:48 compute-0 ceph-mon[74357]: pgmap v710: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:49 compute-0 sudo[238189]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:49.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:49 compute-0 sudo[238343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzjfujiqepjkaxjxulugvhqiwtmdxkjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503669.4468768-1938-36234812965994/AnsiballZ_command.py'
Jan 27 08:47:49 compute-0 sudo[238343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:49.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:49 compute-0 python3.9[238345]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:47:50 compute-0 sudo[238343]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:50 compute-0 sudo[238496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vecnkjulxcbuebbcszlwqifwioaeiiup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503670.1576304-1938-154494561740534/AnsiballZ_command.py'
Jan 27 08:47:50 compute-0 sudo[238496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:50 compute-0 podman[238498]: 2026-01-27 08:47:50.596825431 +0000 UTC m=+0.109290950 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:47:50 compute-0 python3.9[238499]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:47:50 compute-0 sudo[238496]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:50 compute-0 sudo[238573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:50 compute-0 sudo[238573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:50 compute-0 sudo[238573]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:51 compute-0 sudo[238632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:47:51 compute-0 sudo[238632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:51 compute-0 sudo[238632]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:51 compute-0 ceph-mon[74357]: pgmap v711: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:51 compute-0 sudo[238685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:51 compute-0 sudo[238685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:51 compute-0 sudo[238685]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:51 compute-0 sudo[238774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flnekruhtzqsbcsumxhhpcffijwcfbfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503670.860223-1938-230760803940627/AnsiballZ_command.py'
Jan 27 08:47:51 compute-0 sudo[238774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:51 compute-0 sudo[238733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:47:51 compute-0 sudo[238733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:51 compute-0 python3.9[238776]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:47:51 compute-0 sudo[238774]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:51 compute-0 sudo[238733]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:51.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:47:51 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:47:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:47:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:47:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:47:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:47:51 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 90ce3783-9f3f-4733-afeb-f7f9c37e921c does not exist
Jan 27 08:47:51 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev ee9905bc-4970-4bee-9e21-fb2832ac3ed5 does not exist
Jan 27 08:47:51 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 9ac7c5de-84b8-439e-b679-0195ef00e3ed does not exist
Jan 27 08:47:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:47:51 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:47:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:47:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:47:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:47:51 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:47:51 compute-0 sudo[238834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:51 compute-0 sudo[238834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:51 compute-0 sudo[238834]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:51.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:51 compute-0 sudo[238859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:47:51 compute-0 sudo[238859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:51 compute-0 sudo[238859]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:52 compute-0 sudo[238884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:52 compute-0 sudo[238884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:52 compute-0 sudo[238884]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:47:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:47:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:47:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:47:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:47:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:47:52 compute-0 sudo[238909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:47:52 compute-0 sudo[238909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:52 compute-0 podman[238976]: 2026-01-27 08:47:52.540480183 +0000 UTC m=+0.053263272 container create 62d0609c1bc845508c849ab092dd47d3c7cad9154fc23d0b0952ef7d0996d541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:47:52 compute-0 systemd[1]: Started libpod-conmon-62d0609c1bc845508c849ab092dd47d3c7cad9154fc23d0b0952ef7d0996d541.scope.
Jan 27 08:47:52 compute-0 podman[238976]: 2026-01-27 08:47:52.510694696 +0000 UTC m=+0.023477825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:47:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:47:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:47:52 compute-0 podman[238976]: 2026-01-27 08:47:52.647292533 +0000 UTC m=+0.160075602 container init 62d0609c1bc845508c849ab092dd47d3c7cad9154fc23d0b0952ef7d0996d541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_murdock, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:47:52 compute-0 podman[238976]: 2026-01-27 08:47:52.65518849 +0000 UTC m=+0.167971579 container start 62d0609c1bc845508c849ab092dd47d3c7cad9154fc23d0b0952ef7d0996d541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:47:52 compute-0 podman[238976]: 2026-01-27 08:47:52.659268391 +0000 UTC m=+0.172051470 container attach 62d0609c1bc845508c849ab092dd47d3c7cad9154fc23d0b0952ef7d0996d541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_murdock, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:47:52 compute-0 pedantic_murdock[239015]: 167 167
Jan 27 08:47:52 compute-0 systemd[1]: libpod-62d0609c1bc845508c849ab092dd47d3c7cad9154fc23d0b0952ef7d0996d541.scope: Deactivated successfully.
Jan 27 08:47:52 compute-0 podman[238976]: 2026-01-27 08:47:52.663455497 +0000 UTC m=+0.176238626 container died 62d0609c1bc845508c849ab092dd47d3c7cad9154fc23d0b0952ef7d0996d541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-42ec934ba4b9380b1dc5f09299514c240b3fe6c9412f5747b05d29ceff076b9f-merged.mount: Deactivated successfully.
Jan 27 08:47:52 compute-0 podman[238976]: 2026-01-27 08:47:52.714062335 +0000 UTC m=+0.226845424 container remove 62d0609c1bc845508c849ab092dd47d3c7cad9154fc23d0b0952ef7d0996d541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_murdock, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:47:52 compute-0 systemd[1]: libpod-conmon-62d0609c1bc845508c849ab092dd47d3c7cad9154fc23d0b0952ef7d0996d541.scope: Deactivated successfully.
Jan 27 08:47:52 compute-0 podman[239108]: 2026-01-27 08:47:52.870443165 +0000 UTC m=+0.044442950 container create 7fbf0d829a94c119a83375d4229e82b169a23eb6d06a09c90ec57caee36b8e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:47:52 compute-0 systemd[1]: Started libpod-conmon-7fbf0d829a94c119a83375d4229e82b169a23eb6d06a09c90ec57caee36b8e24.scope.
Jan 27 08:47:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:47:52 compute-0 sudo[239157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mciuoleokvdrmmpkcwzxesfmvgegmtcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503672.6062171-2145-70085815264888/AnsiballZ_file.py'
Jan 27 08:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27843e6456952b54185780adf596f07741c3cd6a49976fbf4444778c398c7ca9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:52 compute-0 sudo[239157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27843e6456952b54185780adf596f07741c3cd6a49976fbf4444778c398c7ca9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27843e6456952b54185780adf596f07741c3cd6a49976fbf4444778c398c7ca9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27843e6456952b54185780adf596f07741c3cd6a49976fbf4444778c398c7ca9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27843e6456952b54185780adf596f07741c3cd6a49976fbf4444778c398c7ca9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:52 compute-0 podman[239108]: 2026-01-27 08:47:52.945921345 +0000 UTC m=+0.119921140 container init 7fbf0d829a94c119a83375d4229e82b169a23eb6d06a09c90ec57caee36b8e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chebyshev, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:47:52 compute-0 podman[239108]: 2026-01-27 08:47:52.853866301 +0000 UTC m=+0.027866096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:47:52 compute-0 podman[239108]: 2026-01-27 08:47:52.956459745 +0000 UTC m=+0.130459520 container start 7fbf0d829a94c119a83375d4229e82b169a23eb6d06a09c90ec57caee36b8e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chebyshev, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 27 08:47:52 compute-0 podman[239108]: 2026-01-27 08:47:52.965139943 +0000 UTC m=+0.139139728 container attach 7fbf0d829a94c119a83375d4229e82b169a23eb6d06a09c90ec57caee36b8e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:47:53 compute-0 ceph-mon[74357]: pgmap v712: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:53 compute-0 python3.9[239160]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:47:53 compute-0 sudo[239157]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:53 compute-0 sudo[239314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqimlgcnurgrhuujnannejuvgmgntjbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503673.3356006-2145-46838378638029/AnsiballZ_file.py'
Jan 27 08:47:53 compute-0 sudo[239314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:53 compute-0 hardcore_chebyshev[239155]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:47:53 compute-0 hardcore_chebyshev[239155]: --> relative data size: 1.0
Jan 27 08:47:53 compute-0 hardcore_chebyshev[239155]: --> All data devices are unavailable
Jan 27 08:47:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:53.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:53 compute-0 systemd[1]: libpod-7fbf0d829a94c119a83375d4229e82b169a23eb6d06a09c90ec57caee36b8e24.scope: Deactivated successfully.
Jan 27 08:47:53 compute-0 podman[239108]: 2026-01-27 08:47:53.763009502 +0000 UTC m=+0.937009327 container died 7fbf0d829a94c119a83375d4229e82b169a23eb6d06a09c90ec57caee36b8e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chebyshev, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:47:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-27843e6456952b54185780adf596f07741c3cd6a49976fbf4444778c398c7ca9-merged.mount: Deactivated successfully.
Jan 27 08:47:53 compute-0 podman[239108]: 2026-01-27 08:47:53.812940572 +0000 UTC m=+0.986940347 container remove 7fbf0d829a94c119a83375d4229e82b169a23eb6d06a09c90ec57caee36b8e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:47:53 compute-0 systemd[1]: libpod-conmon-7fbf0d829a94c119a83375d4229e82b169a23eb6d06a09c90ec57caee36b8e24.scope: Deactivated successfully.
Jan 27 08:47:53 compute-0 python3.9[239318]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:47:53 compute-0 sudo[238909]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:53 compute-0 sudo[239314]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:53 compute-0 sudo[239340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:53.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:53 compute-0 sudo[239340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:53 compute-0 sudo[239340]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:53 compute-0 sudo[239370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:47:53 compute-0 sudo[239370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:53 compute-0 sudo[239370]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:54 compute-0 sudo[239414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:54 compute-0 sudo[239414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:54 compute-0 sudo[239414]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:54 compute-0 sudo[239467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:47:54 compute-0 sudo[239467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:47:54.230 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:47:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:47:54.231 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:47:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:47:54.232 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:47:54 compute-0 sudo[239602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apnjfjkyksdyesiyeekeeqeinucosjbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503674.0000074-2145-221084146230622/AnsiballZ_file.py'
Jan 27 08:47:54 compute-0 sudo[239602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:54 compute-0 podman[239631]: 2026-01-27 08:47:54.411665177 +0000 UTC m=+0.034019215 container create 1371e9360888764449165bc8f34be32b36310a130ff0108ac72fa543f4e5a6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Jan 27 08:47:54 compute-0 systemd[1]: Started libpod-conmon-1371e9360888764449165bc8f34be32b36310a130ff0108ac72fa543f4e5a6ae.scope.
Jan 27 08:47:54 compute-0 python3.9[239609]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:47:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:47:54 compute-0 podman[239631]: 2026-01-27 08:47:54.39723588 +0000 UTC m=+0.019589938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:47:54 compute-0 sudo[239602]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:54 compute-0 podman[239631]: 2026-01-27 08:47:54.50544733 +0000 UTC m=+0.127801468 container init 1371e9360888764449165bc8f34be32b36310a130ff0108ac72fa543f4e5a6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 08:47:54 compute-0 podman[239631]: 2026-01-27 08:47:54.511739402 +0000 UTC m=+0.134093460 container start 1371e9360888764449165bc8f34be32b36310a130ff0108ac72fa543f4e5a6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 27 08:47:54 compute-0 podman[239631]: 2026-01-27 08:47:54.51528491 +0000 UTC m=+0.137639038 container attach 1371e9360888764449165bc8f34be32b36310a130ff0108ac72fa543f4e5a6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:47:54 compute-0 modest_zhukovsky[239648]: 167 167
Jan 27 08:47:54 compute-0 systemd[1]: libpod-1371e9360888764449165bc8f34be32b36310a130ff0108ac72fa543f4e5a6ae.scope: Deactivated successfully.
Jan 27 08:47:54 compute-0 podman[239653]: 2026-01-27 08:47:54.552540512 +0000 UTC m=+0.026420317 container died 1371e9360888764449165bc8f34be32b36310a130ff0108ac72fa543f4e5a6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:47:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e82d1fb913cf0927abbca3401aa38152507d5dcc8f9524ecaa2f82a2b7807115-merged.mount: Deactivated successfully.
Jan 27 08:47:54 compute-0 podman[239653]: 2026-01-27 08:47:54.597340451 +0000 UTC m=+0.071220266 container remove 1371e9360888764449165bc8f34be32b36310a130ff0108ac72fa543f4e5a6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:47:54 compute-0 systemd[1]: libpod-conmon-1371e9360888764449165bc8f34be32b36310a130ff0108ac72fa543f4e5a6ae.scope: Deactivated successfully.
Jan 27 08:47:54 compute-0 podman[239745]: 2026-01-27 08:47:54.784993799 +0000 UTC m=+0.043681500 container create 25c131612489865f332c7a1d2da1737c198a25e368f708bd0439a856d7f9f885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hopper, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:47:54 compute-0 systemd[1]: Started libpod-conmon-25c131612489865f332c7a1d2da1737c198a25e368f708bd0439a856d7f9f885.scope.
Jan 27 08:47:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f0332d0cf1414eed66cfcff88e548501b3224f53b6ccb2c222926d5c1fbf7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f0332d0cf1414eed66cfcff88e548501b3224f53b6ccb2c222926d5c1fbf7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f0332d0cf1414eed66cfcff88e548501b3224f53b6ccb2c222926d5c1fbf7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f0332d0cf1414eed66cfcff88e548501b3224f53b6ccb2c222926d5c1fbf7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:54 compute-0 podman[239745]: 2026-01-27 08:47:54.766258675 +0000 UTC m=+0.024946386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:47:54 compute-0 podman[239745]: 2026-01-27 08:47:54.870288198 +0000 UTC m=+0.128975899 container init 25c131612489865f332c7a1d2da1737c198a25e368f708bd0439a856d7f9f885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:47:54 compute-0 podman[239745]: 2026-01-27 08:47:54.878814182 +0000 UTC m=+0.137501883 container start 25c131612489865f332c7a1d2da1737c198a25e368f708bd0439a856d7f9f885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hopper, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:47:54 compute-0 podman[239745]: 2026-01-27 08:47:54.883634205 +0000 UTC m=+0.142321916 container attach 25c131612489865f332c7a1d2da1737c198a25e368f708bd0439a856d7f9f885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 27 08:47:54 compute-0 sudo[239845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peszeqamorzztkbszpumxkkrmhruykpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503674.6869304-2211-2649887314083/AnsiballZ_file.py'
Jan 27 08:47:54 compute-0 sudo[239845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:55 compute-0 ceph-mon[74357]: pgmap v713: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.122659) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503675122783, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 333, "num_deletes": 251, "total_data_size": 149417, "memory_usage": 156920, "flush_reason": "Manual Compaction"}
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503675128033, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 148002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16926, "largest_seqno": 17258, "table_properties": {"data_size": 145895, "index_size": 271, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5458, "raw_average_key_size": 18, "raw_value_size": 141665, "raw_average_value_size": 485, "num_data_blocks": 12, "num_entries": 292, "num_filter_entries": 292, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769503667, "oldest_key_time": 1769503667, "file_creation_time": 1769503675, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5402 microseconds, and 2215 cpu microseconds.
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.128083) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 148002 bytes OK
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.128104) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.131012) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.131025) EVENT_LOG_v1 {"time_micros": 1769503675131021, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.131043) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 147123, prev total WAL file size 147123, number of live WAL files 2.
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.131506) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(144KB)], [38(9475KB)]
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503675131574, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9850672, "oldest_snapshot_seqno": -1}
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4047 keys, 7819187 bytes, temperature: kUnknown
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503675176243, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7819187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7790964, "index_size": 16977, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 102397, "raw_average_key_size": 25, "raw_value_size": 7716329, "raw_average_value_size": 1906, "num_data_blocks": 700, "num_entries": 4047, "num_filter_entries": 4047, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769503675, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.176665) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7819187 bytes
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.179364) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.9 rd, 174.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.3 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(119.4) write-amplify(52.8) OK, records in: 4559, records dropped: 512 output_compression: NoCompression
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.179397) EVENT_LOG_v1 {"time_micros": 1769503675179381, "job": 18, "event": "compaction_finished", "compaction_time_micros": 44801, "compaction_time_cpu_micros": 18502, "output_level": 6, "num_output_files": 1, "total_output_size": 7819187, "num_input_records": 4559, "num_output_records": 4047, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503675179646, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503675182476, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.131357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.182770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.182777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.182778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.182780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:47:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:47:55.182781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:47:55 compute-0 python3.9[239847]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:47:55 compute-0 sudo[239845]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:55 compute-0 sudo[240002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppmfppsowejnuxpvfztzemhsvnkzzlcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503675.3753142-2211-252424989031993/AnsiballZ_file.py'
Jan 27 08:47:55 compute-0 zealous_hopper[239790]: {
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:     "0": [
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:         {
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             "devices": [
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "/dev/loop3"
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             ],
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             "lv_name": "ceph_lv0",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             "lv_size": "7511998464",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             "name": "ceph_lv0",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             "tags": {
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "ceph.cluster_name": "ceph",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "ceph.crush_device_class": "",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "ceph.encrypted": "0",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "ceph.osd_id": "0",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "ceph.type": "block",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:                 "ceph.vdo": "0"
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             },
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             "type": "block",
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:             "vg_name": "ceph_vg0"
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:         }
Jan 27 08:47:55 compute-0 sudo[240002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:55 compute-0 zealous_hopper[239790]:     ]
Jan 27 08:47:55 compute-0 zealous_hopper[239790]: }
Jan 27 08:47:55 compute-0 systemd[1]: libpod-25c131612489865f332c7a1d2da1737c198a25e368f708bd0439a856d7f9f885.scope: Deactivated successfully.
Jan 27 08:47:55 compute-0 podman[239745]: 2026-01-27 08:47:55.69340959 +0000 UTC m=+0.952097281 container died 25c131612489865f332c7a1d2da1737c198a25e368f708bd0439a856d7f9f885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:47:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-42f0332d0cf1414eed66cfcff88e548501b3224f53b6ccb2c222926d5c1fbf7a-merged.mount: Deactivated successfully.
Jan 27 08:47:55 compute-0 podman[239745]: 2026-01-27 08:47:55.748976785 +0000 UTC m=+1.007664476 container remove 25c131612489865f332c7a1d2da1737c198a25e368f708bd0439a856d7f9f885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:47:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:55.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:55 compute-0 systemd[1]: libpod-conmon-25c131612489865f332c7a1d2da1737c198a25e368f708bd0439a856d7f9f885.scope: Deactivated successfully.
Jan 27 08:47:55 compute-0 sudo[239467]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:55 compute-0 sudo[240019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:55 compute-0 sudo[240019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:55 compute-0 sudo[240019]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:55 compute-0 python3.9[240004]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:47:55 compute-0 sudo[240002]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:55.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:55 compute-0 sudo[240044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:47:55 compute-0 sudo[240044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:55 compute-0 sudo[240044]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:55 compute-0 sudo[240069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:55 compute-0 sudo[240069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:55 compute-0 sudo[240069]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:56 compute-0 sudo[240117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:47:56 compute-0 sudo[240117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:56 compute-0 sudo[240315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoenhcgcovhgzqayqhbusbpinawgjtly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503676.089798-2211-87965281805607/AnsiballZ_file.py'
Jan 27 08:47:56 compute-0 sudo[240315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:56 compute-0 podman[240293]: 2026-01-27 08:47:56.374182986 +0000 UTC m=+0.045314974 container create e71f5b8fb6099a27adb62f230c0e415882a069e58c23684253d6bd17f2434dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatterjee, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 08:47:56 compute-0 systemd[1]: Started libpod-conmon-e71f5b8fb6099a27adb62f230c0e415882a069e58c23684253d6bd17f2434dd4.scope.
Jan 27 08:47:56 compute-0 podman[240293]: 2026-01-27 08:47:56.358770894 +0000 UTC m=+0.029902882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:47:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:47:56 compute-0 podman[240293]: 2026-01-27 08:47:56.501097558 +0000 UTC m=+0.172229546 container init e71f5b8fb6099a27adb62f230c0e415882a069e58c23684253d6bd17f2434dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 27 08:47:56 compute-0 podman[240293]: 2026-01-27 08:47:56.511503684 +0000 UTC m=+0.182635662 container start e71f5b8fb6099a27adb62f230c0e415882a069e58c23684253d6bd17f2434dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:47:56 compute-0 podman[240293]: 2026-01-27 08:47:56.514839955 +0000 UTC m=+0.185971943 container attach e71f5b8fb6099a27adb62f230c0e415882a069e58c23684253d6bd17f2434dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatterjee, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:47:56 compute-0 xenodochial_chatterjee[240327]: 167 167
Jan 27 08:47:56 compute-0 systemd[1]: libpod-e71f5b8fb6099a27adb62f230c0e415882a069e58c23684253d6bd17f2434dd4.scope: Deactivated successfully.
Jan 27 08:47:56 compute-0 podman[240293]: 2026-01-27 08:47:56.517789826 +0000 UTC m=+0.188921814 container died e71f5b8fb6099a27adb62f230c0e415882a069e58c23684253d6bd17f2434dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 27 08:47:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f723ac84e1b2d22a5a09487ebbe7ea0ad44200be2e12b80e17552707a6bb3fce-merged.mount: Deactivated successfully.
Jan 27 08:47:56 compute-0 podman[240293]: 2026-01-27 08:47:56.550536294 +0000 UTC m=+0.221668282 container remove e71f5b8fb6099a27adb62f230c0e415882a069e58c23684253d6bd17f2434dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:47:56 compute-0 systemd[1]: libpod-conmon-e71f5b8fb6099a27adb62f230c0e415882a069e58c23684253d6bd17f2434dd4.scope: Deactivated successfully.
Jan 27 08:47:56 compute-0 python3.9[240324]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:47:56 compute-0 sudo[240315]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:56 compute-0 podman[240358]: 2026-01-27 08:47:56.733269137 +0000 UTC m=+0.048840260 container create 5dc59c7644b7f1b0cc288f49703989751d145a27d40ecbd7029da890e7a39d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:47:56 compute-0 systemd[1]: Started libpod-conmon-5dc59c7644b7f1b0cc288f49703989751d145a27d40ecbd7029da890e7a39d1d.scope.
Jan 27 08:47:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:47:56 compute-0 podman[240358]: 2026-01-27 08:47:56.713232618 +0000 UTC m=+0.028803771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb713ac38c0a2bd629dcf71fa59ccdc07a9bd89a32ae23e5c23d8a26308fc418/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb713ac38c0a2bd629dcf71fa59ccdc07a9bd89a32ae23e5c23d8a26308fc418/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb713ac38c0a2bd629dcf71fa59ccdc07a9bd89a32ae23e5c23d8a26308fc418/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb713ac38c0a2bd629dcf71fa59ccdc07a9bd89a32ae23e5c23d8a26308fc418/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:56 compute-0 podman[240358]: 2026-01-27 08:47:56.838809793 +0000 UTC m=+0.154380936 container init 5dc59c7644b7f1b0cc288f49703989751d145a27d40ecbd7029da890e7a39d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:47:56 compute-0 podman[240358]: 2026-01-27 08:47:56.845624539 +0000 UTC m=+0.161195662 container start 5dc59c7644b7f1b0cc288f49703989751d145a27d40ecbd7029da890e7a39d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:47:56 compute-0 podman[240358]: 2026-01-27 08:47:56.8485485 +0000 UTC m=+0.164119653 container attach 5dc59c7644b7f1b0cc288f49703989751d145a27d40ecbd7029da890e7a39d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 27 08:47:57 compute-0 sudo[240522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdfixroscaonbfvbmbgdyfnwvzeenvym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503676.7659318-2211-11560180277629/AnsiballZ_file.py'
Jan 27 08:47:57 compute-0 sudo[240522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:57 compute-0 ceph-mon[74357]: pgmap v714: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:57 compute-0 python3.9[240524]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:47:57 compute-0 sudo[240522]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:47:57 compute-0 friendly_morse[240418]: {
Jan 27 08:47:57 compute-0 friendly_morse[240418]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:47:57 compute-0 friendly_morse[240418]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:47:57 compute-0 friendly_morse[240418]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:47:57 compute-0 friendly_morse[240418]:         "osd_id": 0,
Jan 27 08:47:57 compute-0 friendly_morse[240418]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:47:57 compute-0 friendly_morse[240418]:         "type": "bluestore"
Jan 27 08:47:57 compute-0 friendly_morse[240418]:     }
Jan 27 08:47:57 compute-0 friendly_morse[240418]: }
Jan 27 08:47:57 compute-0 sudo[240689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bclorznhowatdtrqtxzqkytbgjwmzbjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503677.4068542-2211-47947322241119/AnsiballZ_file.py'
Jan 27 08:47:57 compute-0 sudo[240689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:57 compute-0 systemd[1]: libpod-5dc59c7644b7f1b0cc288f49703989751d145a27d40ecbd7029da890e7a39d1d.scope: Deactivated successfully.
Jan 27 08:47:57 compute-0 podman[240358]: 2026-01-27 08:47:57.709713255 +0000 UTC m=+1.025284398 container died 5dc59c7644b7f1b0cc288f49703989751d145a27d40ecbd7029da890e7a39d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:47:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb713ac38c0a2bd629dcf71fa59ccdc07a9bd89a32ae23e5c23d8a26308fc418-merged.mount: Deactivated successfully.
Jan 27 08:47:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:57 compute-0 podman[240358]: 2026-01-27 08:47:57.761639079 +0000 UTC m=+1.077210203 container remove 5dc59c7644b7f1b0cc288f49703989751d145a27d40ecbd7029da890e7a39d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:47:57 compute-0 systemd[1]: libpod-conmon-5dc59c7644b7f1b0cc288f49703989751d145a27d40ecbd7029da890e7a39d1d.scope: Deactivated successfully.
Jan 27 08:47:57 compute-0 sudo[240117]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:47:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:47:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:47:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:47:57 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 3be8a632-22a5-47a9-8a19-17a82f471237 does not exist
Jan 27 08:47:57 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev bfd0cfb7-d262-4230-8360-ec9ddd4b156a does not exist
Jan 27 08:47:57 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev ec789d09-efbb-492d-a796-5f47f6756d03 does not exist
Jan 27 08:47:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:47:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:57.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:47:57 compute-0 sudo[240705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:47:57 compute-0 sudo[240705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:57 compute-0 sudo[240705]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:57 compute-0 python3.9[240693]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:47:57 compute-0 sudo[240689]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:57 compute-0 sudo[240730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:47:57 compute-0 sudo[240730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:47:57 compute-0 sudo[240730]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:58 compute-0 sudo[240904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-timlnjmykunsjszodckfgbmohyfhdxtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503678.109849-2211-147421902565464/AnsiballZ_file.py'
Jan 27 08:47:58 compute-0 sudo[240904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:58 compute-0 python3.9[240906]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:47:58 compute-0 sudo[240904]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:58 compute-0 ceph-mon[74357]: pgmap v715: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:58 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:47:58 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:47:58 compute-0 sudo[241056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dngvfhvpkfknsrkwmxvycfslocagxwls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503678.742266-2211-75085628665429/AnsiballZ_file.py'
Jan 27 08:47:58 compute-0 sudo[241056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:47:59 compute-0 python3.9[241058]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:47:59 compute-0 sudo[241056]: pam_unix(sudo:session): session closed for user root
Jan 27 08:47:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:47:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:47:59.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:47:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:47:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:47:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:47:59.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:00 compute-0 podman[241084]: 2026-01-27 08:48:00.249982235 +0000 UTC m=+0.060109550 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:48:00 compute-0 sudo[241103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:48:00 compute-0 sudo[241103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:00 compute-0 sudo[241103]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:00 compute-0 sudo[241128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:48:00 compute-0 sudo[241128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:00 compute-0 sudo[241128]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:00 compute-0 ceph-mon[74357]: pgmap v716: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:01.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:01.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:48:02 compute-0 ceph-mon[74357]: pgmap v717: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:03.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:03.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:04 compute-0 ceph-mon[74357]: pgmap v718: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:05 compute-0 sudo[241280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cevxltiugvwwgshxxzrdzdsqqlerwqey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503684.4692514-2536-249496821613336/AnsiballZ_getent.py'
Jan 27 08:48:05 compute-0 sudo[241280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:05 compute-0 python3.9[241282]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 27 08:48:05 compute-0 sudo[241280]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:05.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:05.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:06 compute-0 sudo[241434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzxcltghfdieqdijmfejsntswddlufja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503685.4734192-2560-196332103158585/AnsiballZ_group.py'
Jan 27 08:48:06 compute-0 sudo[241434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:06 compute-0 python3.9[241436]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 08:48:06 compute-0 groupadd[241437]: group added to /etc/group: name=nova, GID=42436
Jan 27 08:48:06 compute-0 groupadd[241437]: group added to /etc/gshadow: name=nova
Jan 27 08:48:06 compute-0 groupadd[241437]: new group: name=nova, GID=42436
Jan 27 08:48:06 compute-0 sudo[241434]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:06 compute-0 ceph-mon[74357]: pgmap v719: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:07 compute-0 sudo[241592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpydwobkvwhnyyjclhybokmlcjkcxjxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503686.5351703-2584-233649181756798/AnsiballZ_user.py'
Jan 27 08:48:07 compute-0 sudo[241592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:07 compute-0 python3.9[241594]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 08:48:07 compute-0 useradd[241597]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 27 08:48:07 compute-0 useradd[241597]: add 'nova' to group 'libvirt'
Jan 27 08:48:07 compute-0 useradd[241597]: add 'nova' to shadow group 'libvirt'
Jan 27 08:48:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:07 compute-0 sudo[241592]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:48:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:07.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:07.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:08 compute-0 sshd-session[241628]: Accepted publickey for zuul from 192.168.122.30 port 53080 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 08:48:08 compute-0 systemd-logind[799]: New session 51 of user zuul.
Jan 27 08:48:08 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 27 08:48:08 compute-0 sshd-session[241628]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 08:48:08 compute-0 sshd-session[241631]: Received disconnect from 192.168.122.30 port 53080:11: disconnected by user
Jan 27 08:48:08 compute-0 sshd-session[241631]: Disconnected from user zuul 192.168.122.30 port 53080
Jan 27 08:48:08 compute-0 sshd-session[241628]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:48:08 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 27 08:48:08 compute-0 systemd-logind[799]: Session 51 logged out. Waiting for processes to exit.
Jan 27 08:48:08 compute-0 systemd-logind[799]: Removed session 51.
Jan 27 08:48:08 compute-0 ceph-mon[74357]: pgmap v720: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:09 compute-0 python3.9[241782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:48:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:09.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:09.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:09 compute-0 python3.9[241903]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503688.806323-2659-155553897454876/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:48:10 compute-0 python3.9[242053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:48:10 compute-0 ceph-mon[74357]: pgmap v721: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:11 compute-0 python3.9[242129]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:48:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:11 compute-0 python3.9[242280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:48:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:11.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:11.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:12 compute-0 python3.9[242401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503691.2046564-2659-98051066459732/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:48:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:48:12 compute-0 python3.9[242551]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:48:12 compute-0 ceph-mon[74357]: pgmap v722: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:13 compute-0 python3.9[242673]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503692.393157-2659-169249020176923/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:48:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:13.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:13.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:13 compute-0 python3.9[242823]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:48:14 compute-0 python3.9[242944]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503693.5096974-2659-37180792135810/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:48:14 compute-0 ceph-mon[74357]: pgmap v723: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:48:14
Jan 27 08:48:14 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:48:14 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:48:14 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'images']
Jan 27 08:48:14 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:48:15 compute-0 python3.9[243094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:48:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:15 compute-0 python3.9[243216]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503694.723425-2659-161646075504817/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:48:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:15.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:15.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:16 compute-0 sudo[243366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mherfpuresvgdlvaupgosnabqasorzbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503696.0094962-2908-47873607243455/AnsiballZ_file.py'
Jan 27 08:48:16 compute-0 sudo[243366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:16 compute-0 python3.9[243368]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:48:16 compute-0 sudo[243366]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:16 compute-0 ceph-mon[74357]: pgmap v724: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:17 compute-0 sudo[243519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klweujrsfiwwwzkmckpnbalubdrprajw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503696.772831-2932-222270026486706/AnsiballZ_copy.py'
Jan 27 08:48:17 compute-0 sudo[243519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:17 compute-0 python3.9[243521]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:48:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:17 compute-0 sudo[243519]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:48:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:17.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:17.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:18 compute-0 sudo[243671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygximyealcngbyszobwmonarabdavoyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503697.668829-2956-136169876399003/AnsiballZ_stat.py'
Jan 27 08:48:18 compute-0 sudo[243671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:18 compute-0 python3.9[243673]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:48:18 compute-0 sudo[243671]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:18 compute-0 sudo[243823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzfhyeoahbllmmnbzcemekpxxbyajmhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503698.4473393-2980-78022793177061/AnsiballZ_stat.py'
Jan 27 08:48:18 compute-0 sudo[243823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:18 compute-0 ceph-mon[74357]: pgmap v725: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:18 compute-0 python3.9[243825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:48:19 compute-0 sudo[243823]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:19 compute-0 sudo[243947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atpnsnkeyisykpwsqjexktkfhpxdnwsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503698.4473393-2980-78022793177061/AnsiballZ_copy.py'
Jan 27 08:48:19 compute-0 sudo[243947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:19 compute-0 python3.9[243949]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769503698.4473393-2980-78022793177061/.source _original_basename=.5uu_14tn follow=False checksum=a75024fcc7efc598a45fed5f992d0763eb84d5a2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 27 08:48:19 compute-0 sudo[243947]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:19.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:19.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:20 compute-0 python3.9[244101]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:48:20 compute-0 sudo[244128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:48:20 compute-0 sudo[244128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:20 compute-0 sudo[244128]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:20 compute-0 sudo[244157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:48:20 compute-0 sudo[244157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:20 compute-0 sudo[244157]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:20 compute-0 podman[244152]: 2026-01-27 08:48:20.905196199 +0000 UTC m=+0.126857811 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 08:48:21 compute-0 ceph-mon[74357]: pgmap v726: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:21 compute-0 python3.9[244329]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:48:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:21.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:21.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:21 compute-0 python3.9[244450]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503700.815772-3058-5259623689055/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:48:22 compute-0 python3.9[244600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:48:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:48:23 compute-0 ceph-mon[74357]: pgmap v727: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:23 compute-0 python3.9[244721]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769503702.1466005-3103-41639187305901/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:48:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:23.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:23.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:24 compute-0 sudo[244872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxtzfngimqbzrmpkozvyxrkvlnffcxdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503703.6008523-3154-129087720603025/AnsiballZ_container_config_data.py'
Jan 27 08:48:24 compute-0 sudo[244872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:48:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:48:24 compute-0 python3.9[244874]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 27 08:48:24 compute-0 sudo[244872]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:25 compute-0 ceph-mon[74357]: pgmap v728: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:25 compute-0 sudo[245025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcqmotpykwxxoaoitfkypxfrhsgqfdpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503704.6240191-3187-108654539011901/AnsiballZ_container_config_hash.py'
Jan 27 08:48:25 compute-0 sudo[245025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:25 compute-0 python3.9[245027]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 08:48:25 compute-0 sudo[245025]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:25.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:25.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:26 compute-0 sudo[245177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uexyjxvhgrafeqpicbiktvpgoqamfpvr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769503705.6779-3217-112001227897256/AnsiballZ_edpm_container_manage.py'
Jan 27 08:48:26 compute-0 sudo[245177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:26 compute-0 python3[245179]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 08:48:27 compute-0 ceph-mon[74357]: pgmap v729: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:48:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:27.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:27.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:29 compute-0 ceph-mon[74357]: pgmap v730: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:29.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:29.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:30 compute-0 ceph-mon[74357]: pgmap v731: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:31.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:31.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:32 compute-0 podman[245239]: 2026-01-27 08:48:32.43694263 +0000 UTC m=+1.245253603 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 08:48:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:48:33 compute-0 ceph-mon[74357]: pgmap v732: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:33.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:33.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:35 compute-0 ceph-mon[74357]: pgmap v733: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:35.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:35.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:36 compute-0 podman[245194]: 2026-01-27 08:48:36.286988963 +0000 UTC m=+9.718698565 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 08:48:36 compute-0 podman[245298]: 2026-01-27 08:48:36.443161436 +0000 UTC m=+0.054817614 container create 57a11ca5dca6fe56b47a3d52d0b80db0249e9a2a0b1ec166fbb3ad6ff284dc66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, container_name=nova_compute_init)
Jan 27 08:48:36 compute-0 podman[245298]: 2026-01-27 08:48:36.41923275 +0000 UTC m=+0.030888948 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 08:48:36 compute-0 python3[245179]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 27 08:48:36 compute-0 sudo[245177]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:36 compute-0 ceph-mon[74357]: pgmap v734: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:37 compute-0 sudo[245486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tavibtaupwjomtidkxgyjypxptyhhwui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503716.795839-3241-88428429101830/AnsiballZ_stat.py'
Jan 27 08:48:37 compute-0 sudo[245486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:37 compute-0 python3.9[245489]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:48:37 compute-0 sudo[245486]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:37.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:37.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:48:38 compute-0 sudo[245641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctxsgmvqqgmfjdiivhsrymfreehmvqod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503718.101494-3277-186594450327530/AnsiballZ_container_config_data.py'
Jan 27 08:48:38 compute-0 sudo[245641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:38 compute-0 python3.9[245643]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 27 08:48:38 compute-0 sudo[245641]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:39 compute-0 ceph-mon[74357]: pgmap v735: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:39 compute-0 sudo[245794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-splxmavhakkcabwwcaoliwzzvvlarjga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503719.1104517-3310-178927330482741/AnsiballZ_container_config_hash.py'
Jan 27 08:48:39 compute-0 sudo[245794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:39 compute-0 python3.9[245796]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 08:48:39 compute-0 sudo[245794]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:39.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:39.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:40 compute-0 sudo[245946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvuoxijyxuzfvpcxavqgxxedlgfrcyzo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769503720.2466335-3340-197508797982117/AnsiballZ_edpm_container_manage.py'
Jan 27 08:48:40 compute-0 sudo[245946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:40 compute-0 python3[245948]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 08:48:40 compute-0 sudo[245956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:48:40 compute-0 sudo[245956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:40 compute-0 sudo[245956]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:40 compute-0 sudo[245998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:48:40 compute-0 sudo[245998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:40 compute-0 sudo[245998]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:41 compute-0 podman[246033]: 2026-01-27 08:48:41.033527759 +0000 UTC m=+0.052296106 container create 76b9fb88f5e374cd718206da4110a10415d2904377ee3e3f53eb885f8f956223 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 08:48:41 compute-0 podman[246033]: 2026-01-27 08:48:41.0043748 +0000 UTC m=+0.023143167 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 08:48:41 compute-0 python3[245948]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 27 08:48:41 compute-0 ceph-mon[74357]: pgmap v736: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:41 compute-0 sudo[245946]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:41 compute-0 sudo[246224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfkbgppckdbnrqzmgzeozufgwjzvdahn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503721.4490674-3364-162170145242164/AnsiballZ_stat.py'
Jan 27 08:48:41 compute-0 sudo[246224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:41.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:41 compute-0 python3.9[246226]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:48:41 compute-0 sudo[246224]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:41.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:48:42 compute-0 sudo[246378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufyjvwrgjgujzfjqoydbtooguiyvesab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503722.4012728-3391-17307042686734/AnsiballZ_file.py'
Jan 27 08:48:42 compute-0 sudo[246378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:42 compute-0 python3.9[246380]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:48:42 compute-0 sudo[246378]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:43 compute-0 ceph-mon[74357]: pgmap v737: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:43 compute-0 sudo[246530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdnozbyrspqjqiatvfvleupjgannyfcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503722.9736106-3391-71308056081773/AnsiballZ_copy.py'
Jan 27 08:48:43 compute-0 sudo[246530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:43 compute-0 python3.9[246532]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769503722.9736106-3391-71308056081773/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:48:43 compute-0 sudo[246530]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:43.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:43 compute-0 sudo[246606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jogplfaiopgbedxmkkuqaslfylhgjmbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503722.9736106-3391-71308056081773/AnsiballZ_systemd.py'
Jan 27 08:48:43 compute-0 sudo[246606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:43.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:44 compute-0 python3.9[246608]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:48:44 compute-0 systemd[1]: Reloading.
Jan 27 08:48:44 compute-0 systemd-rc-local-generator[246631]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:48:44 compute-0 systemd-sysv-generator[246641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:48:44 compute-0 sudo[246606]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:44 compute-0 sudo[246717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anvfqixorrekkmxhpnlvxwrsnuvzgodo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503722.9736106-3391-71308056081773/AnsiballZ_systemd.py'
Jan 27 08:48:44 compute-0 sudo[246717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:48:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:48:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:48:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:48:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:48:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:48:45 compute-0 python3.9[246719]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:48:45 compute-0 ceph-mon[74357]: pgmap v738: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:45 compute-0 systemd[1]: Reloading.
Jan 27 08:48:45 compute-0 systemd-sysv-generator[246753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:48:45 compute-0 systemd-rc-local-generator[246750]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:48:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:45 compute-0 systemd[1]: Starting nova_compute container...
Jan 27 08:48:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc864d3ec1f3f44d912affd18dcc6e9e9af4f44a693a728abb3e1c7941cd3b4/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc864d3ec1f3f44d912affd18dcc6e9e9af4f44a693a728abb3e1c7941cd3b4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc864d3ec1f3f44d912affd18dcc6e9e9af4f44a693a728abb3e1c7941cd3b4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc864d3ec1f3f44d912affd18dcc6e9e9af4f44a693a728abb3e1c7941cd3b4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc864d3ec1f3f44d912affd18dcc6e9e9af4f44a693a728abb3e1c7941cd3b4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:45 compute-0 podman[246760]: 2026-01-27 08:48:45.62059758 +0000 UTC m=+0.094920114 container init 76b9fb88f5e374cd718206da4110a10415d2904377ee3e3f53eb885f8f956223 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, managed_by=edpm_ansible)
Jan 27 08:48:45 compute-0 podman[246760]: 2026-01-27 08:48:45.627011497 +0000 UTC m=+0.101334021 container start 76b9fb88f5e374cd718206da4110a10415d2904377ee3e3f53eb885f8f956223 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 27 08:48:45 compute-0 podman[246760]: nova_compute
Jan 27 08:48:45 compute-0 nova_compute[246774]: + sudo -E kolla_set_configs
Jan 27 08:48:45 compute-0 systemd[1]: Started nova_compute container.
Jan 27 08:48:45 compute-0 sudo[246717]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Validating config file
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Copying service configuration files
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Deleting /etc/ceph
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Creating directory /etc/ceph
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /etc/ceph
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Writing out command to execute
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:48:45 compute-0 nova_compute[246774]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 08:48:45 compute-0 nova_compute[246774]: ++ cat /run_command
Jan 27 08:48:45 compute-0 nova_compute[246774]: + CMD=nova-compute
Jan 27 08:48:45 compute-0 nova_compute[246774]: + ARGS=
Jan 27 08:48:45 compute-0 nova_compute[246774]: + sudo kolla_copy_cacerts
Jan 27 08:48:45 compute-0 nova_compute[246774]: + [[ ! -n '' ]]
Jan 27 08:48:45 compute-0 nova_compute[246774]: + . kolla_extend_start
Jan 27 08:48:45 compute-0 nova_compute[246774]: Running command: 'nova-compute'
Jan 27 08:48:45 compute-0 nova_compute[246774]: + echo 'Running command: '\''nova-compute'\'''
Jan 27 08:48:45 compute-0 nova_compute[246774]: + umask 0022
Jan 27 08:48:45 compute-0 nova_compute[246774]: + exec nova-compute
Jan 27 08:48:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:45.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:45.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:47 compute-0 python3.9[246937]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:48:47 compute-0 ceph-mon[74357]: pgmap v739: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:48:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:47.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:47.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:48 compute-0 nova_compute[246774]: 2026-01-27 08:48:48.097 246779 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 08:48:48 compute-0 nova_compute[246774]: 2026-01-27 08:48:48.097 246779 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 08:48:48 compute-0 nova_compute[246774]: 2026-01-27 08:48:48.097 246779 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 08:48:48 compute-0 nova_compute[246774]: 2026-01-27 08:48:48.097 246779 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 27 08:48:48 compute-0 python3.9[247088]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:48:48 compute-0 nova_compute[246774]: 2026-01-27 08:48:48.241 246779 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:48:48 compute-0 nova_compute[246774]: 2026-01-27 08:48:48.272 246779 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:48:48 compute-0 nova_compute[246774]: 2026-01-27 08:48:48.273 246779 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 27 08:48:49 compute-0 python3.9[247242]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.103 246779 INFO nova.virt.driver [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 27 08:48:49 compute-0 ceph-mon[74357]: pgmap v740: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.263 246779 INFO nova.compute.provider_config [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.286 246779 DEBUG oslo_concurrency.lockutils [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.286 246779 DEBUG oslo_concurrency.lockutils [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.286 246779 DEBUG oslo_concurrency.lockutils [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.287 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.287 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.287 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.287 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.287 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.288 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.288 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.288 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.288 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.288 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.288 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.288 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.289 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.289 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.289 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.289 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.289 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.290 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.290 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.290 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.290 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.290 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.290 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.291 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.291 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.291 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.291 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.291 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.291 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.292 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.292 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.292 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.292 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.292 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.293 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.293 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.293 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.293 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.293 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.294 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.294 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.294 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.294 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.294 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.295 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.295 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.295 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.295 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.295 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.295 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.296 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.296 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.296 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.296 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.296 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.296 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.297 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.297 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.297 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.297 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.297 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.298 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.298 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.298 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.298 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.298 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.298 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.298 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.299 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.299 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.299 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.299 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.299 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.299 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.299 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.300 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.300 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.300 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.300 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.300 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.300 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.300 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.301 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.301 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.301 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.301 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.301 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.301 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.302 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.302 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.302 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.302 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.302 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.302 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.302 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.303 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.303 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.303 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.303 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.303 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.303 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.304 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.304 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.304 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.304 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.304 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.304 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.305 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.305 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.305 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.305 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.305 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.305 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.305 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.306 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.306 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.306 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.306 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.306 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.306 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.306 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.307 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.307 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.307 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.307 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.307 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.307 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.308 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.308 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.308 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.308 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.309 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.309 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.309 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.309 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.309 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.309 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.310 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.310 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.310 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.310 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.310 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.310 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.311 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.311 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.311 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.311 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.311 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.311 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.311 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.312 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.312 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.312 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.312 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.312 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.313 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.313 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.313 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.313 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.313 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.314 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.314 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.314 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.314 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.314 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.315 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.315 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.315 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.315 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.315 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.316 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.316 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.316 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.316 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.316 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.317 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.317 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.317 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.317 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.317 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.318 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.318 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.318 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.318 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.318 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.319 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.319 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.319 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.319 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.319 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.319 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.320 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.320 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.320 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.320 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.320 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.321 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.321 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.321 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.322 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.322 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.322 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.322 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.322 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.323 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.323 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.323 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.323 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.323 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.324 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.324 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.324 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.324 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.324 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.325 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.325 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.325 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.325 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.325 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.325 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.326 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.326 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.326 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.326 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.326 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.327 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.327 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.327 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.327 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.327 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.327 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.328 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.328 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.328 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.328 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.328 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.329 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.329 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.329 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.329 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.329 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.330 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.330 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.330 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.330 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.330 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.331 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.331 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.331 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.331 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.331 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.331 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.332 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.332 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.332 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.332 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.332 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.333 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.333 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.333 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.333 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.333 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.333 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.334 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.334 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.334 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.334 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.334 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.335 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.335 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.335 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.335 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.335 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.336 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.336 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.336 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.336 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.336 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.336 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.337 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.337 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.337 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.337 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.337 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.338 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.338 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.338 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.338 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.338 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.339 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.339 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.339 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.339 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.339 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.340 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.340 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.340 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.340 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.340 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.341 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.341 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.341 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.341 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.341 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.342 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.342 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.342 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.342 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.342 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.343 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.343 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.343 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.343 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.343 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.344 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.344 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.344 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.344 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.344 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.345 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.345 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.345 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.345 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.345 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.346 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.346 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.346 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.346 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.347 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.347 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.347 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.347 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.347 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.348 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.348 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.348 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.348 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.348 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.349 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.349 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.349 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.349 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.349 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.350 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.350 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.350 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.350 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.350 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.351 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.351 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.351 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.351 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.351 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.352 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.352 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.352 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.352 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.353 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.353 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.353 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.353 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.354 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.354 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.354 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.354 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.355 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.355 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.355 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.355 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.355 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.355 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.356 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.356 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.356 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.356 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.356 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.357 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.357 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.357 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.357 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.358 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.358 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.358 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.358 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.358 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.359 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.359 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.359 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.359 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.359 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.360 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.360 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.360 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.360 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.360 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.361 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.361 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.361 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.361 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.361 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.362 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.362 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.362 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.362 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.362 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.363 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.363 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.363 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.363 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.363 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.364 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.364 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.364 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.364 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.364 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.365 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.365 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.365 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.365 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.365 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.366 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.366 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.366 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.366 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.366 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.367 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.367 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.367 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.367 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.367 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.368 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.368 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
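Every option in the [barbican], [barbican_service_user], and [vault] groups above is still at its oslo.config default -- note for example vault.vault_url = http://127.0.0.1:8200, the shipped default -- so this node does not appear to have an external key manager configured. For orientation only, a populated nova.conf [vault] section would look roughly like the sketch below; every value marked as a placeholder is an assumption, not something taken from this log.

    [vault]
    # Illustrative sketch only -- the dump above shows all of these at defaults.
    use_ssl = True
    ssl_ca_crt_file = /etc/pki/tls/certs/vault-ca.pem   # placeholder path
    vault_url = https://vault.example.com:8200          # placeholder URL
    kv_mountpoint = secret
    kv_version = 2
    approle_role_id = <role-id>                         # placeholder
    approle_secret_id = <secret-id>                     # placeholder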
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.368 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.368 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.368 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.369 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.369 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.369 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.369 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.369 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.370 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.370 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.371 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.371 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.372 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.372 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.372 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.373 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.373 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.373 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.374 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
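In the [keystone] group above, only service_type = identity and valid_interfaces = ['internal', 'public'] carry concrete values (whether set explicitly or as packaged defaults); the rest are unset. In nova.conf terms, where list-valued options are written comma-separated, that corresponds to:

    [keystone]
    # Values as reported in the dump above.
    service_type = identity
    valid_interfaces = internal,public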
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.374 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.374 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.375 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.375 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.375 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.375 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.375 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.376 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.376 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.376 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.376 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.376 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.376 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.377 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.377 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.377 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.377 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.377 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.378 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.378 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.378 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.378 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.378 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.379 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.379 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.379 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.379 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.379 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.380 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.380 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.380 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.380 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.380 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.380 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.381 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.381 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.381 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.381 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.381 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.382 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.382 246779 WARNING oslo_config.cfg [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 27 08:48:49 compute-0 nova_compute[246774]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 27 08:48:49 compute-0 nova_compute[246774]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 27 08:48:49 compute-0 nova_compute[246774]: and ``live_migration_inbound_addr`` respectively.
Jan 27 08:48:49 compute-0 nova_compute[246774]: ).  Its value may be silently ignored in the future.
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.382 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.382 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.383 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.383 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.383 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.383 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.383 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.383 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.384 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.384 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.384 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.384 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.384 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.384 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.385 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.385 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.385 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.385 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.385 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.rbd_secret_uuid        = 281e9bde-2795-59f4-98ac-90cf5b49a2de log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.385 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.385 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.386 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.386 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.386 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.386 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.386 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.386 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.386 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.387 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.387 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.387 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.387 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.387 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.387 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.388 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.388 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.388 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.388 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.388 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.388 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.389 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.389 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.389 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.389 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.389 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.389 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.390 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.390 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.390 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.390 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.390 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.390 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
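Taken together, the [libvirt] dump above describes a KVM node with Ceph RBD ephemeral storage (images_type = rbd, pool vms, rbd_user openstack), q35 guests on a custom Nehalem CPU model, and TLS-native live migration -- expressed here through the deprecated live_migration_uri = qemu+tls://%s/system that the warning above flags. Below is a nova.conf sketch of the salient values from the dump, with the deprecated URI rewritten as the scheme/address pair the warning recommends; the inbound address is a placeholder, since the log does not show one.

    [libvirt]
    # Values as reported in the dump above.
    virt_type = kvm
    cpu_mode = custom
    cpu_models = Nehalem
    hw_machine_type = x86_64=q35
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = 281e9bde-2795-59f4-98ac-90cf5b49a2de
    volume_use_multipath = True
    live_migration_with_native_tls = True
    # Replacement for the deprecated live_migration_uri = qemu+tls://%s/system:
    live_migration_scheme = tls
    live_migration_inbound_addr = <migration-ip>   # placeholder, not in this log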
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.391 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.391 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.391 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.391 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.391 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.391 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.392 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.392 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.392 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.392 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.392 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.392 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.392 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.393 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.393 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.393 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.393 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.393 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.393 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.393 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.394 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.394 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.394 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.394 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.394 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.394 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.394 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.395 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
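The [neutron] group above authenticates to Keystone with a password, uses the internal endpoint in regionOne, and has the compute node serving instance metadata on Neutron's behalf (service_metadata_proxy = True); the shared secret is masked in the log and stays a placeholder below. Reconstructed as nova.conf:

    [neutron]
    # Values as reported in the dump above.
    auth_type = password
    region_name = regionOne
    valid_interfaces = internal
    service_metadata_proxy = True
    metadata_proxy_shared_secret = <secret>   # logged as ****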
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.395 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.395 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.395 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.395 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.395 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
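Per the [notifications] group above, this service emits only legacy unversioned notifications at INFO level; with notification_format = unversioned, the configured versioned topic list is present but unused. The equivalent nova.conf lines:

    [notifications]
    # Values as reported in the dump above.
    notification_format = unversioned
    default_level = INFO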
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.396 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.396 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.396 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.396 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.396 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.396 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.397 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.397 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.397 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.397 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.397 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.397 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.397 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.397 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.398 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.398 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.398 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.398 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.398 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.398 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.398 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.399 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.399 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.399 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.399 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.399 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.399 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.399 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.400 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.400 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.400 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.400 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.400 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.400 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.400 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.400 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.401 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.401 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.401 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.401 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
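The [placement] group above shows the usual Keystone password auth for the placement client: the nova service user in the service project (Default domains on both sides), regionOne, internal endpoint only. Reconstructed as nova.conf, with the masked password left as a placeholder:

    [placement]
    # Values as reported in the dump above.
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = nova
    password = <password>   # logged as ****
    user_domain_name = Default
    project_name = service
    project_domain_name = Default
    region_name = regionOne
    valid_interfaces = internal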
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.401 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.401 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.401 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.402 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.402 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.402 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.402 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.402 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.402 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.402 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.403 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.403 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.403 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.403 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.403 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.403 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.404 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.404 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.404 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.404 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.404 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.404 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.404 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.405 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.405 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.405 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.405 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.405 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.405 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.406 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.406 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.406 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.406 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.406 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.406 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.406 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.407 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.407 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.407 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.407 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.407 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.407 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.407 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.408 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.408 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.408 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.408 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.408 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.408 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.408 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.409 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.409 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.409 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.409 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.409 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.409 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.410 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.410 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.410 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.410 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.410 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.410 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.410 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.411 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.411 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.411 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.411 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.411 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.411 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.411 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.412 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.412 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.412 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.412 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.412 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.412 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.413 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.413 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.413 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.413 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.413 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.413 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.413 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.414 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.414 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.414 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.414 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.414 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.414 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.414 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.415 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.415 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.415 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.415 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.415 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.415 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.416 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.416 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.416 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.416 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.416 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.416 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.416 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.416 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.417 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.417 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.417 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.417 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.417 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.417 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.417 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.418 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.418 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.418 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.418 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.418 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.418 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.418 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.419 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.419 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.419 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.419 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.419 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.419 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.420 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.420 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.420 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.420 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.420 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.420 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.420 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.421 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.421 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.421 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.421 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.421 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.421 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.421 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.422 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.422 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.422 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.422 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.422 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.422 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.422 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.423 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.423 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.423 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.423 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.423 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.423 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.424 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.424 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.424 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.424 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.424 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.424 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.424 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.425 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.425 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.425 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.425 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.425 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.425 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.425 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.426 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.426 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.426 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.426 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.426 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.426 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.426 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.427 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.427 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.427 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.427 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.427 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.427 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.428 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.428 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.428 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.428 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.428 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.428 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.428 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.429 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.429 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.429 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.429 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.429 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.429 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.429 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.430 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.430 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.430 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.430 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.430 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.430 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.430 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.430 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.431 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.431 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.431 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.431 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.431 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.431 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.432 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.432 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.432 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.432 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.432 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.432 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.432 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.433 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.433 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.433 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.433 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.433 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.433 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.434 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.434 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.434 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.434 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.434 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.434 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.434 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.435 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.435 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.435 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.435 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.435 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.435 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.435 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.435 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.436 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.436 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.436 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.436 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.436 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.436 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.436 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.437 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.437 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.437 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.437 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.437 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.437 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.437 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.438 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.438 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.438 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.438 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.438 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.438 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.438 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.439 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.439 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.439 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.439 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.439 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.439 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.440 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.440 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.440 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.440 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.440 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.440 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.440 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.441 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.441 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.441 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.441 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.441 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.441 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.441 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.442 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.442 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.442 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.442 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.442 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.442 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.442 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.443 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.443 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.443 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.443 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.443 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.443 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.443 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.444 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.444 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.444 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.444 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.444 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.444 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.444 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.445 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.445 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.445 246779 DEBUG oslo_service.service [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 08:48:49 compute-0 nova_compute[246774]: 2026-01-27 08:48:49.446 246779 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 27 08:48:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:49.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:49.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:50 compute-0 sudo[247393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psvamtwlnlzfxydablotfffvqmkczyse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503729.36982-3571-259199100603820/AnsiballZ_podman_container.py'
Jan 27 08:48:50 compute-0 sudo[247393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:50 compute-0 python3.9[247395]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 27 08:48:50 compute-0 sudo[247393]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:50 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:48:51 compute-0 ceph-mon[74357]: pgmap v741: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:51 compute-0 podman[247521]: 2026-01-27 08:48:51.308705748 +0000 UTC m=+0.121898096 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 08:48:51 compute-0 sudo[247595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oquyxizwnnxgknjysxpdrmuhtnfkporh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503730.845497-3595-114240095304863/AnsiballZ_systemd.py'
Jan 27 08:48:51 compute-0 sudo[247595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:51 compute-0 python3.9[247599]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:48:51 compute-0 systemd[1]: Stopping nova_compute container...
Jan 27 08:48:51 compute-0 nova_compute[246774]: 2026-01-27 08:48:51.736 246779 DEBUG oslo_concurrency.lockutils [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:48:51 compute-0 nova_compute[246774]: 2026-01-27 08:48:51.736 246779 DEBUG oslo_concurrency.lockutils [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:48:51 compute-0 nova_compute[246774]: 2026-01-27 08:48:51.736 246779 DEBUG oslo_concurrency.lockutils [None req-2c2c7274-4de2-4186-86ba-40e85191dde0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:48:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:51.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:51.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:52 compute-0 systemd[1]: libpod-76b9fb88f5e374cd718206da4110a10415d2904377ee3e3f53eb885f8f956223.scope: Deactivated successfully.
Jan 27 08:48:52 compute-0 systemd[1]: libpod-76b9fb88f5e374cd718206da4110a10415d2904377ee3e3f53eb885f8f956223.scope: Consumed 3.005s CPU time.
Jan 27 08:48:52 compute-0 podman[247604]: 2026-01-27 08:48:52.120645913 +0000 UTC m=+0.425314359 container died 76b9fb88f5e374cd718206da4110a10415d2904377ee3e3f53eb885f8f956223 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 08:48:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-76b9fb88f5e374cd718206da4110a10415d2904377ee3e3f53eb885f8f956223-userdata-shm.mount: Deactivated successfully.
Jan 27 08:48:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fc864d3ec1f3f44d912affd18dcc6e9e9af4f44a693a728abb3e1c7941cd3b4-merged.mount: Deactivated successfully.
Jan 27 08:48:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:48:53 compute-0 podman[247604]: 2026-01-27 08:48:53.015850542 +0000 UTC m=+1.320518988 container cleanup 76b9fb88f5e374cd718206da4110a10415d2904377ee3e3f53eb885f8f956223 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 08:48:53 compute-0 podman[247604]: nova_compute
Jan 27 08:48:53 compute-0 podman[247642]: nova_compute
Jan 27 08:48:53 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 27 08:48:53 compute-0 systemd[1]: Stopped nova_compute container.
Jan 27 08:48:53 compute-0 systemd[1]: Starting nova_compute container...
Jan 27 08:48:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc864d3ec1f3f44d912affd18dcc6e9e9af4f44a693a728abb3e1c7941cd3b4/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc864d3ec1f3f44d912affd18dcc6e9e9af4f44a693a728abb3e1c7941cd3b4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc864d3ec1f3f44d912affd18dcc6e9e9af4f44a693a728abb3e1c7941cd3b4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc864d3ec1f3f44d912affd18dcc6e9e9af4f44a693a728abb3e1c7941cd3b4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc864d3ec1f3f44d912affd18dcc6e9e9af4f44a693a728abb3e1c7941cd3b4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:53 compute-0 podman[247656]: 2026-01-27 08:48:53.23228004 +0000 UTC m=+0.102541704 container init 76b9fb88f5e374cd718206da4110a10415d2904377ee3e3f53eb885f8f956223 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 27 08:48:53 compute-0 podman[247656]: 2026-01-27 08:48:53.240201207 +0000 UTC m=+0.110462841 container start 76b9fb88f5e374cd718206da4110a10415d2904377ee3e3f53eb885f8f956223 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:48:53 compute-0 podman[247656]: nova_compute
Jan 27 08:48:53 compute-0 nova_compute[247671]: + sudo -E kolla_set_configs
Jan 27 08:48:53 compute-0 systemd[1]: Started nova_compute container.
Jan 27 08:48:53 compute-0 sudo[247595]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Validating config file
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Copying service configuration files
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 27 08:48:53 compute-0 ceph-mon[74357]: pgmap v742: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Deleting /etc/ceph
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Creating directory /etc/ceph
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /etc/ceph
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Writing out command to execute
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:48:53 compute-0 nova_compute[247671]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 08:48:53 compute-0 nova_compute[247671]: ++ cat /run_command
Jan 27 08:48:53 compute-0 nova_compute[247671]: + CMD=nova-compute
Jan 27 08:48:53 compute-0 nova_compute[247671]: + ARGS=
Jan 27 08:48:53 compute-0 nova_compute[247671]: + sudo kolla_copy_cacerts
Jan 27 08:48:53 compute-0 nova_compute[247671]: + [[ ! -n '' ]]
Jan 27 08:48:53 compute-0 nova_compute[247671]: + . kolla_extend_start
Jan 27 08:48:53 compute-0 nova_compute[247671]: Running command: 'nova-compute'
Jan 27 08:48:53 compute-0 nova_compute[247671]: + echo 'Running command: '\''nova-compute'\'''
Jan 27 08:48:53 compute-0 nova_compute[247671]: + umask 0022
Jan 27 08:48:53 compute-0 nova_compute[247671]: + exec nova-compute
Jan 27 08:48:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:53.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:53.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:48:54.232 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:48:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:48:54.232 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:48:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:48:54.232 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:48:54 compute-0 ceph-mon[74357]: pgmap v743: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:54 compute-0 sudo[247833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivqgadgjvrugiqdlhywqodejhdwxeaij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769503734.4088707-3622-251722969730177/AnsiballZ_podman_container.py'
Jan 27 08:48:54 compute-0 sudo[247833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 08:48:55 compute-0 python3.9[247835]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 27 08:48:55 compute-0 systemd[1]: Started libpod-conmon-57a11ca5dca6fe56b47a3d52d0b80db0249e9a2a0b1ec166fbb3ad6ff284dc66.scope.
Jan 27 08:48:55 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7051839e0fdf4cc455124b230a6cb0c618b5b78393f8c2143167edabac4fd7e5/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7051839e0fdf4cc455124b230a6cb0c618b5b78393f8c2143167edabac4fd7e5/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7051839e0fdf4cc455124b230a6cb0c618b5b78393f8c2143167edabac4fd7e5/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:55 compute-0 nova_compute[247671]: 2026-01-27 08:48:55.251 247675 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 08:48:55 compute-0 nova_compute[247671]: 2026-01-27 08:48:55.252 247675 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 08:48:55 compute-0 nova_compute[247671]: 2026-01-27 08:48:55.252 247675 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 08:48:55 compute-0 nova_compute[247671]: 2026-01-27 08:48:55.253 247675 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
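
[annotation] The three os_vif DEBUG lines followed by the INFO summary show plugin discovery: each VIF plugin class (linux_bridge, noop, ovs) is loaded from a separately installed package. This style of discovery works through setuptools entry points; a sketch of that mechanism on Python 3.9 (the interpreter in this log), where the group name "os_vif" is an assumption rather than something confirmed by the log:

    # Sketch of entry-point based plugin discovery; the "os_vif" group
    # name is an assumption. Prints nothing if the group is not installed.
    from importlib.metadata import entry_points

    eps = entry_points()  # Python 3.9: dict-like mapping of group -> list
    for ep in eps.get("os_vif", []):
        plugin_cls = ep.load()  # e.g. vif_plug_ovs.ovs.OvsPlugin
        print(f"Loaded VIF plugin class {plugin_cls!r} with name {ep.name!r}")
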
Jan 27 08:48:55 compute-0 podman[247862]: 2026-01-27 08:48:55.265738015 +0000 UTC m=+0.125348129 container init 57a11ca5dca6fe56b47a3d52d0b80db0249e9a2a0b1ec166fbb3ad6ff284dc66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Jan 27 08:48:55 compute-0 podman[247862]: 2026-01-27 08:48:55.272273245 +0000 UTC m=+0.131883349 container start 57a11ca5dca6fe56b47a3d52d0b80db0249e9a2a0b1ec166fbb3ad6ff284dc66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 08:48:55 compute-0 python3.9[247835]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Applying nova statedir ownership
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 27 08:48:55 compute-0 nova_compute_init[247885]: INFO:nova_statedir:Nova statedir ownership complete
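
[annotation] The nova_compute_init run above executes /sbin/nova_statedir_ownership.py inside a one-shot container: it walks /var/lib/nova, re-chowns anything not already owned by the nova uid/gid (42436:42436 here), sets the container SELinux context, and honors NOVA_STATEDIR_OWNERSHIP_SKIP (/var/lib/nova/compute_id in this deployment). A minimal sketch of that loop, assuming chcon for the relabel step since the real script's SELinux mechanism is not visible in the log:

    import os, subprocess

    TARGET_UID = TARGET_GID = 42436
    SKIP = set(os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":"))
    CONTEXT = "system_u:object_r:container_file_t:s0"

    def fix_ownership(root="/var/lib/nova"):
        for dirpath, dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue  # e.g. /var/lib/nova/compute_id is left alone
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    os.lchown(path, TARGET_UID, TARGET_GID)
                # Relabel via chcon -- an assumption standing in for
                # whatever SELinux binding the real script uses.
                subprocess.run(["chcon", "-h", CONTEXT, path], check=False)
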
Jan 27 08:48:55 compute-0 systemd[1]: libpod-57a11ca5dca6fe56b47a3d52d0b80db0249e9a2a0b1ec166fbb3ad6ff284dc66.scope: Deactivated successfully.
Jan 27 08:48:55 compute-0 podman[247884]: 2026-01-27 08:48:55.339212422 +0000 UTC m=+0.032776581 container died 57a11ca5dca6fe56b47a3d52d0b80db0249e9a2a0b1ec166fbb3ad6ff284dc66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:48:55 compute-0 nova_compute[247671]: 2026-01-27 08:48:55.392 247675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57a11ca5dca6fe56b47a3d52d0b80db0249e9a2a0b1ec166fbb3ad6ff284dc66-userdata-shm.mount: Deactivated successfully.
Jan 27 08:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7051839e0fdf4cc455124b230a6cb0c618b5b78393f8c2143167edabac4fd7e5-merged.mount: Deactivated successfully.
Jan 27 08:48:55 compute-0 podman[247896]: 2026-01-27 08:48:55.407024332 +0000 UTC m=+0.058070925 container cleanup 57a11ca5dca6fe56b47a3d52d0b80db0249e9a2a0b1ec166fbb3ad6ff284dc66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3)
Jan 27 08:48:55 compute-0 nova_compute[247671]: 2026-01-27 08:48:55.412 247675 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:48:55 compute-0 nova_compute[247671]: 2026-01-27 08:48:55.413 247675 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
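
[annotation] The grep above is a feature probe: the iscsiadm binary is searched for the literal string node.session.scan, and exit status 0 would indicate a build that supports per-session manual scanning. The return code 1 seen here means the string is absent, so the feature is treated as unsupported and the probe is deliberately not retried. A minimal reproduction of the same check:

    import subprocess

    # Same probe as the log line: grep -F returns 0 if the literal string
    # occurs in the binary, 1 if not, 2 on error (e.g. file missing).
    rc = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode
    print("manual scan supported" if rc == 0 else "manual scan not supported")
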
Jan 27 08:48:55 compute-0 systemd[1]: libpod-conmon-57a11ca5dca6fe56b47a3d52d0b80db0249e9a2a0b1ec166fbb3ad6ff284dc66.scope: Deactivated successfully.
Jan 27 08:48:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:55 compute-0 sudo[247833]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:55.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:55.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:56 compute-0 ceph-mon[74357]: pgmap v744: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:56 compute-0 sshd-session[222370]: Connection closed by 192.168.122.30 port 45636
Jan 27 08:48:56 compute-0 sshd-session[222367]: pam_unix(sshd:session): session closed for user zuul
Jan 27 08:48:56 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 27 08:48:56 compute-0 systemd[1]: session-50.scope: Consumed 2min 952ms CPU time.
Jan 27 08:48:56 compute-0 systemd-logind[799]: Session 50 logged out. Waiting for processes to exit.
Jan 27 08:48:56 compute-0 systemd-logind[799]: Removed session 50.
Jan 27 08:48:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:48:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:48:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:57.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:48:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:57.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
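
[annotation] The recurring radosgw triplets (starting new request / req done / beast access line) arrive every two seconds from 192.168.122.100 and 192.168.122.102, always as an anonymous HEAD / HTTP/1.0 answered 200 with near-zero latency. That cadence is consistent with load-balancer health probes rather than user traffic, though the prober itself is not identified in the log. A minimal client that produces the same server-side signature; host and port here are assumptions, not taken from the log:

    import http.client

    # Hypothetical reproduction of the health probe seen in the beast
    # access lines; endpoint and port 8080 are assumptions.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")   # anonymous HEAD /, no body expected
    resp = conn.getresponse()
    print(resp.status)          # 200 when the gateway is answering
    conn.close()
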
Jan 27 08:48:58 compute-0 nova_compute[247671]: 2026-01-27 08:48:58.241 247675 INFO nova.virt.driver [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 27 08:48:58 compute-0 sudo[247950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:48:58 compute-0 sudo[247950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:58 compute-0 sudo[247950]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:58 compute-0 nova_compute[247671]: 2026-01-27 08:48:58.378 247675 INFO nova.compute.provider_config [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 27 08:48:58 compute-0 sudo[247975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:48:58 compute-0 sudo[247975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:58 compute-0 sudo[247975]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:58 compute-0 sudo[248000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:48:58 compute-0 sudo[248000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:58 compute-0 sudo[248000]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:58 compute-0 ceph-mon[74357]: pgmap v745: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:58 compute-0 sudo[248025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 27 08:48:58 compute-0 sudo[248025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:58 compute-0 sudo[248025]: pam_unix(sudo:session): session closed for user root
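
[annotation] The bursts of ceph-admin sudo lines follow a fixed pattern: /bin/true to verify passwordless sudo works, /bin/which python3 to locate an interpreter, then the content-addressed cephadm binary cached at /var/lib/ceph/<fsid>/cephadm.<digest> running the actual subcommand (check-host above, gather-facts and ceph-volume below). A hedged sketch of that per-command handshake as it appears from this side of the connection; the ssh wrapper is an assumption, only the sudo commands themselves are taken from the log:

    import subprocess

    # Sketch of the handshake visible in the sudo lines; fsid/digest come
    # from the log, the ssh transport is assumed, not observed here.
    def run_cephadm(host, fsid, digest, *args):
        def ssh(cmd):
            return subprocess.run(["ssh", host, "sudo"] + cmd,
                                  capture_output=True, text=True)
        ssh(["/bin/true"])                          # can we sudo at all?
        py = ssh(["/bin/which", "python3"]).stdout.strip()
        binary = f"/var/lib/ceph/{fsid}/cephadm.{digest}"
        return ssh([py, binary, "--timeout", "895", *args])
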
Jan 27 08:48:58 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:48:58 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:48:58 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:48:58 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:48:59 compute-0 sudo[248070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:48:59 compute-0 sudo[248070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:59 compute-0 sudo[248070]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:59 compute-0 sudo[248096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:48:59 compute-0 sudo[248096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:59 compute-0 sudo[248096]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:59 compute-0 sudo[248121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:48:59 compute-0 sudo[248121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:59 compute-0 sudo[248121]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:59 compute-0 sudo[248146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:48:59 compute-0 sudo[248146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:48:59 compute-0 sudo[248146]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:48:59.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:48:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:48:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:48:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:48:59 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:48:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:48:59 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:48:59 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f44c0ec0-a62a-4fa7-be4f-a6f2068672b5 does not exist
Jan 27 08:48:59 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 6db6c3a3-bd1b-43fb-8eda-1d3dba723a26 does not exist
Jan 27 08:48:59 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 481366a3-30bf-414b-ad8a-7e3e01d4ff64 does not exist
Jan 27 08:48:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:48:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:48:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:48:59 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:48:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:48:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:48:59 compute-0 sudo[248202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:48:59 compute-0 sudo[248202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:48:59 compute-0 sudo[248202]: pam_unix(sudo:session): session closed for user root
Jan 27 08:48:59 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:48:59 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:48:59 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:48:59 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:48:59 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:48:59 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:48:59 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:48:59 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:48:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:48:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:48:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:48:59.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:00 compute-0 sudo[248227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:49:00 compute-0 sudo[248227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:00 compute-0 sudo[248227]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:00 compute-0 sudo[248252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:49:00 compute-0 sudo[248252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:00 compute-0 sudo[248252]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:00 compute-0 sudo[248277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:49:00 compute-0 sudo[248277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:00 compute-0 podman[248343]: 2026-01-27 08:49:00.447601806 +0000 UTC m=+0.045052477 container create a1b0d6f4d551c3ffc61773434418f22a88264f605273badf393dbfb7cb54b3b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:49:00 compute-0 systemd[1]: Started libpod-conmon-a1b0d6f4d551c3ffc61773434418f22a88264f605273badf393dbfb7cb54b3b9.scope.
Jan 27 08:49:00 compute-0 podman[248343]: 2026-01-27 08:49:00.422460497 +0000 UTC m=+0.019911148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:49:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:49:00 compute-0 podman[248343]: 2026-01-27 08:49:00.541666477 +0000 UTC m=+0.139117168 container init a1b0d6f4d551c3ffc61773434418f22a88264f605273badf393dbfb7cb54b3b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:49:00 compute-0 podman[248343]: 2026-01-27 08:49:00.554745195 +0000 UTC m=+0.152195826 container start a1b0d6f4d551c3ffc61773434418f22a88264f605273badf393dbfb7cb54b3b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 27 08:49:00 compute-0 podman[248343]: 2026-01-27 08:49:00.558241142 +0000 UTC m=+0.155691793 container attach a1b0d6f4d551c3ffc61773434418f22a88264f605273badf393dbfb7cb54b3b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:49:00 compute-0 gracious_nash[248359]: 167 167
Jan 27 08:49:00 compute-0 systemd[1]: libpod-a1b0d6f4d551c3ffc61773434418f22a88264f605273badf393dbfb7cb54b3b9.scope: Deactivated successfully.
Jan 27 08:49:00 compute-0 podman[248343]: 2026-01-27 08:49:00.579197377 +0000 UTC m=+0.176648048 container died a1b0d6f4d551c3ffc61773434418f22a88264f605273badf393dbfb7cb54b3b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 08:49:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e567465dbd36c123572acb678eee0d67fda7e71527325c38f27c52cbfb9a67c1-merged.mount: Deactivated successfully.
Jan 27 08:49:00 compute-0 podman[248343]: 2026-01-27 08:49:00.621851777 +0000 UTC m=+0.219302408 container remove a1b0d6f4d551c3ffc61773434418f22a88264f605273badf393dbfb7cb54b3b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 27 08:49:00 compute-0 systemd[1]: libpod-conmon-a1b0d6f4d551c3ffc61773434418f22a88264f605273badf393dbfb7cb54b3b9.scope: Deactivated successfully.
Jan 27 08:49:00 compute-0 podman[248384]: 2026-01-27 08:49:00.815330564 +0000 UTC m=+0.043457172 container create c81c7445fe5b4415b5865da94292193b070514fe0a0eb927331112e85b8e7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:49:00 compute-0 systemd[1]: Started libpod-conmon-c81c7445fe5b4415b5865da94292193b070514fe0a0eb927331112e85b8e7aee.scope.
Jan 27 08:49:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2b32bcc87583ee6682810968b58ab1871cd03e1ed8e6ed0832eb259dd7e471/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2b32bcc87583ee6682810968b58ab1871cd03e1ed8e6ed0832eb259dd7e471/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2b32bcc87583ee6682810968b58ab1871cd03e1ed8e6ed0832eb259dd7e471/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2b32bcc87583ee6682810968b58ab1871cd03e1ed8e6ed0832eb259dd7e471/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2b32bcc87583ee6682810968b58ab1871cd03e1ed8e6ed0832eb259dd7e471/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:00 compute-0 podman[248384]: 2026-01-27 08:49:00.795648454 +0000 UTC m=+0.023775112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:49:00 compute-0 podman[248384]: 2026-01-27 08:49:00.904974244 +0000 UTC m=+0.133100872 container init c81c7445fe5b4415b5865da94292193b070514fe0a0eb927331112e85b8e7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 27 08:49:00 compute-0 podman[248384]: 2026-01-27 08:49:00.91975406 +0000 UTC m=+0.147880678 container start c81c7445fe5b4415b5865da94292193b070514fe0a0eb927331112e85b8e7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:49:00 compute-0 podman[248384]: 2026-01-27 08:49:00.923490422 +0000 UTC m=+0.151617060 container attach c81c7445fe5b4415b5865da94292193b070514fe0a0eb927331112e85b8e7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 08:49:00 compute-0 ceph-mon[74357]: pgmap v746: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:01 compute-0 sudo[248405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:49:01 compute-0 sudo[248405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:01 compute-0 sudo[248405]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:01 compute-0 sudo[248430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:49:01 compute-0 sudo[248430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:01 compute-0 sudo[248430]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:01 compute-0 xenodochial_visvesvaraya[248400]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:49:01 compute-0 xenodochial_visvesvaraya[248400]: --> relative data size: 1.0
Jan 27 08:49:01 compute-0 xenodochial_visvesvaraya[248400]: --> All data devices are unavailable
Jan 27 08:49:01 compute-0 systemd[1]: libpod-c81c7445fe5b4415b5865da94292193b070514fe0a0eb927331112e85b8e7aee.scope: Deactivated successfully.
Jan 27 08:49:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:01.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:01 compute-0 podman[248467]: 2026-01-27 08:49:01.836559551 +0000 UTC m=+0.036380049 container died c81c7445fe5b4415b5865da94292193b070514fe0a0eb927331112e85b8e7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 27 08:49:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd2b32bcc87583ee6682810968b58ab1871cd03e1ed8e6ed0832eb259dd7e471-merged.mount: Deactivated successfully.
Jan 27 08:49:01 compute-0 podman[248467]: 2026-01-27 08:49:01.884391423 +0000 UTC m=+0.084211901 container remove c81c7445fe5b4415b5865da94292193b070514fe0a0eb927331112e85b8e7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:49:01 compute-0 systemd[1]: libpod-conmon-c81c7445fe5b4415b5865da94292193b070514fe0a0eb927331112e85b8e7aee.scope: Deactivated successfully.
Jan 27 08:49:01 compute-0 sudo[248277]: pam_unix(sudo:session): session closed for user root
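
[annotation] The ceph-volume lvm batch run just closed reported "0 physical, 1 LVM" data devices and then "All data devices are unavailable": the candidate LV /dev/ceph_vg0/ceph_lv0 already carries an OSD (its lv_tags, dumped by the lvm list a moment later, include ceph.osd_id=0), so batch created nothing and exited cleanly. One way to check for that condition up front, using the stock lvs CLI rather than anything cephadm-specific:

    import json, subprocess

    # lvs can report lv_tags as JSON; an LV whose tags carry ceph.osd_id
    # is already consumed by an OSD, which ceph-volume batch treats as
    # "unavailable". (Stock lvm2 CLI, not cephadm's own code path.)
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,lv_tags",
         "ceph_vg0/ceph_lv0"],
        capture_output=True, text=True, check=True,
    ).stdout
    lv = json.loads(out)["report"][0]["lv"][0]
    print("already an OSD" if "ceph.osd_id=" in lv["lv_tags"] else "free")
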
Jan 27 08:49:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:01.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:02 compute-0 sudo[248482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:49:02 compute-0 sudo[248482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:02 compute-0 sudo[248482]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:02 compute-0 sudo[248507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:49:02 compute-0 sudo[248507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:02 compute-0 sudo[248507]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:02 compute-0 sudo[248532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:49:02 compute-0 sudo[248532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:02 compute-0 sudo[248532]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:02 compute-0 sudo[248557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:49:02 compute-0 sudo[248557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:02 compute-0 podman[248623]: 2026-01-27 08:49:02.566319821 +0000 UTC m=+0.038465216 container create 2fd1839706db599dbf3b6db5b68393ade35a3404d3986a3bb574b25323ed7e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hellman, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:49:02 compute-0 systemd[1]: Started libpod-conmon-2fd1839706db599dbf3b6db5b68393ade35a3404d3986a3bb574b25323ed7e3f.scope.
Jan 27 08:49:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:49:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:49:02 compute-0 podman[248623]: 2026-01-27 08:49:02.549382666 +0000 UTC m=+0.021528091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:49:02 compute-0 podman[248623]: 2026-01-27 08:49:02.64756428 +0000 UTC m=+0.119709705 container init 2fd1839706db599dbf3b6db5b68393ade35a3404d3986a3bb574b25323ed7e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 27 08:49:02 compute-0 podman[248623]: 2026-01-27 08:49:02.657170094 +0000 UTC m=+0.129315519 container start 2fd1839706db599dbf3b6db5b68393ade35a3404d3986a3bb574b25323ed7e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hellman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 08:49:02 compute-0 podman[248623]: 2026-01-27 08:49:02.661095291 +0000 UTC m=+0.133240686 container attach 2fd1839706db599dbf3b6db5b68393ade35a3404d3986a3bb574b25323ed7e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hellman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:49:02 compute-0 quizzical_hellman[248640]: 167 167
Jan 27 08:49:02 compute-0 systemd[1]: libpod-2fd1839706db599dbf3b6db5b68393ade35a3404d3986a3bb574b25323ed7e3f.scope: Deactivated successfully.
Jan 27 08:49:02 compute-0 podman[248623]: 2026-01-27 08:49:02.663532818 +0000 UTC m=+0.135678253 container died 2fd1839706db599dbf3b6db5b68393ade35a3404d3986a3bb574b25323ed7e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:49:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8e3d07f63459987625db919a75cc3228dc73b34610cbff04d0d1fa242e714be-merged.mount: Deactivated successfully.
Jan 27 08:49:02 compute-0 podman[248623]: 2026-01-27 08:49:02.706337222 +0000 UTC m=+0.178482617 container remove 2fd1839706db599dbf3b6db5b68393ade35a3404d3986a3bb574b25323ed7e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:49:02 compute-0 systemd[1]: libpod-conmon-2fd1839706db599dbf3b6db5b68393ade35a3404d3986a3bb574b25323ed7e3f.scope: Deactivated successfully.
Jan 27 08:49:02 compute-0 podman[248663]: 2026-01-27 08:49:02.883969576 +0000 UTC m=+0.056448080 container create 900ca2205db6c47ad6133b3643f575bc267b2979910343d29d7f3b9dba87de28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ishizaka, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 08:49:02 compute-0 systemd[1]: Started libpod-conmon-900ca2205db6c47ad6133b3643f575bc267b2979910343d29d7f3b9dba87de28.scope.
Jan 27 08:49:02 compute-0 podman[248663]: 2026-01-27 08:49:02.85714738 +0000 UTC m=+0.029625894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:49:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986ced0276beb41d9f2dc1551f84ff1d60eea1ae56b53cb07aed3a2a4c8590f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986ced0276beb41d9f2dc1551f84ff1d60eea1ae56b53cb07aed3a2a4c8590f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986ced0276beb41d9f2dc1551f84ff1d60eea1ae56b53cb07aed3a2a4c8590f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986ced0276beb41d9f2dc1551f84ff1d60eea1ae56b53cb07aed3a2a4c8590f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:02 compute-0 podman[248663]: 2026-01-27 08:49:02.979101376 +0000 UTC m=+0.151579900 container init 900ca2205db6c47ad6133b3643f575bc267b2979910343d29d7f3b9dba87de28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ishizaka, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:49:02 compute-0 podman[248663]: 2026-01-27 08:49:02.991563387 +0000 UTC m=+0.164041881 container start 900ca2205db6c47ad6133b3643f575bc267b2979910343d29d7f3b9dba87de28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 08:49:02 compute-0 podman[248663]: 2026-01-27 08:49:02.995547847 +0000 UTC m=+0.168026331 container attach 900ca2205db6c47ad6133b3643f575bc267b2979910343d29d7f3b9dba87de28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ishizaka, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:49:03 compute-0 ceph-mon[74357]: pgmap v747: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]: {
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:     "0": [
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:         {
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             "devices": [
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "/dev/loop3"
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             ],
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             "lv_name": "ceph_lv0",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             "lv_size": "7511998464",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             "name": "ceph_lv0",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             "tags": {
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "ceph.cluster_name": "ceph",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "ceph.crush_device_class": "",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "ceph.encrypted": "0",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "ceph.osd_id": "0",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "ceph.type": "block",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:                 "ceph.vdo": "0"
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             },
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             "type": "block",
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:             "vg_name": "ceph_vg0"
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:         }
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]:     ]
Jan 27 08:49:03 compute-0 magical_ishizaka[248680]: }
Jan 27 08:49:03 compute-0 systemd[1]: libpod-900ca2205db6c47ad6133b3643f575bc267b2979910343d29d7f3b9dba87de28.scope: Deactivated successfully.
Jan 27 08:49:03 compute-0 podman[248663]: 2026-01-27 08:49:03.758323523 +0000 UTC m=+0.930802017 container died 900ca2205db6c47ad6133b3643f575bc267b2979910343d29d7f3b9dba87de28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ishizaka, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:49:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-986ced0276beb41d9f2dc1551f84ff1d60eea1ae56b53cb07aed3a2a4c8590f6-merged.mount: Deactivated successfully.
Jan 27 08:49:03 compute-0 podman[248663]: 2026-01-27 08:49:03.811593484 +0000 UTC m=+0.984071978 container remove 900ca2205db6c47ad6133b3643f575bc267b2979910343d29d7f3b9dba87de28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:49:03 compute-0 systemd[1]: libpod-conmon-900ca2205db6c47ad6133b3643f575bc267b2979910343d29d7f3b9dba87de28.scope: Deactivated successfully.
Jan 27 08:49:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:03.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:03 compute-0 sudo[248557]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:03 compute-0 sudo[248702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:49:03 compute-0 sudo[248702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:03 compute-0 sudo[248702]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:03 compute-0 sudo[248727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.984 247675 DEBUG oslo_concurrency.lockutils [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.986 247675 DEBUG oslo_concurrency.lockutils [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.986 247675 DEBUG oslo_concurrency.lockutils [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.987 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.988 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 08:49:03 compute-0 sudo[248727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.988 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.989 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.989 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.989 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.990 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.990 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 sudo[248727]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.991 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.991 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.992 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.992 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.992 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.993 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.993 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.993 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.994 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.994 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.994 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.995 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.995 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.996 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.996 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.996 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.997 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.997 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.997 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.998 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.998 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.999 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:03 compute-0 nova_compute[247671]: 2026-01-27 08:49:03.999 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:03.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.000 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.001 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.001 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.001 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.002 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.002 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.002 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.002 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.003 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.003 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.003 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.004 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.004 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.004 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.004 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.004 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.005 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.005 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.005 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.005 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.006 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.006 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.006 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.006 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.006 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.007 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.007 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.007 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.007 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.007 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.008 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.008 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.008 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.008 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.008 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.009 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.009 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.009 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.009 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.009 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.010 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.010 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.010 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.010 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.010 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.011 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.011 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.011 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.011 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.012 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.012 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.012 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.012 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.012 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.013 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.013 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.013 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.013 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.013 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.014 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.014 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.014 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.014 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.014 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.015 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.015 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.015 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.015 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.015 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.016 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.016 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.016 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.016 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.016 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.017 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.017 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.017 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.017 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.018 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.018 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.018 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.018 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.018 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.019 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.019 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.019 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.019 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.019 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.019 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.019 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.020 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.020 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.020 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.020 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.020 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.020 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.020 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.021 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.021 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.021 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.021 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.021 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.021 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.021 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.022 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.022 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.022 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.022 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.022 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.022 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.022 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.023 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.023 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.023 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.023 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.023 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.023 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.024 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.024 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.024 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.024 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.024 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.024 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.025 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.025 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.025 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.025 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.025 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.026 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.026 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.026 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.026 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.026 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.026 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.026 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.027 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.027 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.027 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.027 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.027 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.027 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.027 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.028 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.028 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.028 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.028 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.028 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.029 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.029 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.029 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.029 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.029 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.029 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.029 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.030 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.030 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.030 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.030 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.030 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.030 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.030 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.031 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.031 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.031 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.031 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.031 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.031 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.032 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.032 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.032 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.032 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.032 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.032 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.032 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.033 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.033 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.033 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.033 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.033 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.033 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.034 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.034 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.034 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.034 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.034 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.035 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.035 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.035 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.035 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.035 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.035 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.036 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.036 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.036 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.036 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.036 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.036 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.036 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.037 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.037 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.037 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.037 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.037 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.037 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.037 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.038 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.038 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.038 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.038 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.038 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.038 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.039 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.039 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.039 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.039 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.039 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.039 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.040 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.040 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.040 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.040 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.040 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.040 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.040 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.041 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.041 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.041 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.041 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.041 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.041 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.041 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.042 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.042 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.042 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.042 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.042 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.042 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.042 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.043 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.043 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.043 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.043 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.043 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.043 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.044 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.044 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.044 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.044 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.044 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.044 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.045 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.045 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.045 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.045 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.045 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.045 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.045 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.046 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.046 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.046 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.046 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.046 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.046 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.046 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.047 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.047 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.047 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.047 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.047 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 sudo[248752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.047 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.047 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.048 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.048 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.048 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.048 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.048 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.049 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.049 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.049 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 sudo[248752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.049 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.049 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.049 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.049 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.050 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.050 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.050 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.050 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.050 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.050 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.050 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.051 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.051 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.051 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.051 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.051 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.052 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.052 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.052 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.052 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 sudo[248752]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.052 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.052 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.052 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.053 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.053 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.053 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.053 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.053 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.053 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.053 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.054 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.054 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.054 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.054 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.054 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.054 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.054 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.055 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.055 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.055 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.055 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.055 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.055 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.055 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.055 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.056 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.056 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.056 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.056 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.056 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.057 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.057 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.057 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.057 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.057 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.057 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.058 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.058 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.058 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.058 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.058 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.058 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.058 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.058 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.059 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.059 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.059 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.059 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.059 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.059 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.059 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.060 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.060 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.060 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.060 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.060 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.060 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.060 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.061 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.061 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.061 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.061 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.061 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.061 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.062 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.062 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.062 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.062 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.062 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.062 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.062 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.063 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.063 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.063 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.063 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.063 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.063 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.063 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.064 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.064 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.064 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.064 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.064 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.064 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.064 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.065 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.065 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.065 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.065 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.065 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.065 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.065 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.065 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.066 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.066 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.066 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.066 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.066 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.066 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.066 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.067 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.067 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.067 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.067 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.067 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.067 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.067 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.068 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.068 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.068 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.068 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.068 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.069 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.069 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.069 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.069 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.069 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.069 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.069 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.070 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.070 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.070 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.070 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.070 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.070 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.071 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.071 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.071 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.071 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.071 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.071 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.071 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.072 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.072 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.072 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.072 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.072 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.072 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.072 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.072 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.073 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.073 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.073 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.073 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.073 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.073 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.073 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.074 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.074 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.074 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.074 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.074 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.074 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.074 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.075 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.075 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.075 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.075 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.075 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.075 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.076 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.076 247675 WARNING oslo_config.cfg [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 27 08:49:04 compute-0 nova_compute[247671]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 27 08:49:04 compute-0 nova_compute[247671]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 27 08:49:04 compute-0 nova_compute[247671]: and ``live_migration_inbound_addr`` respectively.
Jan 27 08:49:04 compute-0 nova_compute[247671]: ).  Its value may be silently ignored in the future.
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.076 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
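[Editor's illustration, not part of the captured log.] The warning above says the deprecated ``live_migration_uri`` splits into the two ``[libvirt]`` options that also appear in this config dump, ``live_migration_scheme`` and ``live_migration_inbound_addr``. A minimal nova.conf sketch of that replacement follows; the ``tls`` scheme is inferred from the ``qemu+tls://%s/system`` value logged here, while the inbound address is a hypothetical placeholder, not a value from this log:

    [libvirt]
    # Replaces the scheme portion of the deprecated live_migration_uri
    # (this deployment logs qemu+tls://%s/system, i.e. TLS transport).
    live_migration_scheme = tls
    # Replaces the target-host portion of the URI. 192.0.2.10 is a
    # hypothetical migration-network address used only for illustration.
    live_migration_inbound_addr = 192.0.2.10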
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.076 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.077 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.077 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.077 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.077 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.077 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.077 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.078 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.078 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.078 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.078 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.078 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.078 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.079 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.079 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.079 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.079 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.079 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.rbd_secret_uuid        = 281e9bde-2795-59f4-98ac-90cf5b49a2de log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.079 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.079 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.079 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.080 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.080 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.080 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.080 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.080 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.080 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.081 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.081 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.081 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.081 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.081 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.081 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.082 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.082 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.082 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.082 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.082 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.082 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.083 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.083 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.083 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.083 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.083 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.083 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.083 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.084 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.084 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.084 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.084 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.084 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.084 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.084 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.085 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.085 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.085 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.085 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.085 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.085 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.086 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.086 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.086 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.086 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.086 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.086 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.087 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.087 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.087 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.087 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.087 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.087 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.087 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.088 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.088 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.088 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.088 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.088 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.088 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.089 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.089 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.089 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.089 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.089 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.089 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.090 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.090 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.090 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.090 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.090 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.090 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.091 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.091 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.091 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.091 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.091 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.091 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.091 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.092 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.092 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.092 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.092 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.092 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.092 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.092 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.093 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.093 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.093 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.093 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.093 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.093 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.094 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.094 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.094 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.094 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.095 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.095 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.095 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.095 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.095 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.095 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.096 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.096 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.096 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.096 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
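The placement.* block is the [placement] section nova_compute uses to authenticate against Keystone and reach the Placement API. Restated as a nova.conf sketch (the password is masked as **** in the log, so a placeholder stands in for it here):

    [placement]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = nova
    password = ****   # masked by oslo.config; the real secret is not in the log
    user_domain_name = Default
    project_name = service
    project_domain_name = Default
    region_name = regionOne
    valid_interfaces = internal

With valid_interfaces = internal, the service resolves the Placement endpoint from the internal interface of the Keystone service catalog rather than the public one.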
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.096 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.096 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.096 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.097 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.097 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.097 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.097 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.097 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.098 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.098 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.098 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.098 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.098 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
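The quota.* values are the per-project defaults enforced through nova.quota.DbQuotaDriver, with post-allocation rechecking enabled. As a nova.conf sketch of the values logged above:

    [quota]
    driver = nova.quota.DbQuotaDriver
    instances = 10
    cores = 20
    ram = 51200                 # megabytes of RAM per project
    key_pairs = 100
    metadata_items = 128
    injected_files = 5
    injected_file_content_bytes = 10240
    injected_file_path_length = 255
    server_groups = 10
    server_group_members = 10
    count_usage_from_placement = False
    recheck_quota = True

recheck_quota = True makes nova re-count usage after resources are allocated and roll the allocation back if a racing request pushed the project over its limit.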
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.098 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.099 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.099 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.099 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.099 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.099 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.099 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.100 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.100 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.100 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.100 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.100 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.100 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.101 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.101 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.101 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.101 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.101 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.101 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.102 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.102 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.102 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.102 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.102 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.102 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.102 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.103 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 sudo[248777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.103 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.103 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 sudo[248777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
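The two sudo records interleaved here are unrelated to nova: cephadm, running as ceph-admin, is inventorying OSD devices on the same host. Reformatted for readability, the logged command is:

    /bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        --timeout 895 \
        ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de \
        -- raw list --format json

cephadm runs ceph-volume inside the pinned ceph container image; "raw list --format json" reports, as JSON, any OSDs provisioned in raw mode on the node's block devices.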
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.103 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.103 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.103 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.103 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.104 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.104 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.104 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.104 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
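The filter_scheduler.* group defines how candidate hosts are filtered and then weighed. The pipeline this deployment applies, as a nova.conf sketch:

    [filter_scheduler]
    available_filters = nova.scheduler.filters.all_filters
    enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
    host_subset_size = 1
    max_instances_per_host = 50
    max_io_ops_per_host = 8

Hosts that pass every enabled filter are ranked by the weighers (weight_classes = nova.scheduler.weights.all_weighers) using the *_weight_multiplier values logged above; host_subset_size = 1 means the top-ranked host is always chosen rather than sampled from a subset.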
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.104 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.104 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.104 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.104 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.105 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.105 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.105 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.105 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.105 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.106 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.106 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.106 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.106 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.106 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.106 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.106 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.107 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.107 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.107 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.107 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
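service_user.send_service_user_token = True is worth noting: nova attaches its own service token, acquired with the [service_user] credentials, alongside the end user's token on calls to other OpenStack services, so long-running operations survive the expiry of the user's token. The minimal stanza implied by the values above:

    [service_user]
    send_service_user_token = True
    auth_type = password
    # cafile/certfile/keyfile are all None in the log, so the default
    # CA bundle applies when verifying TLS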
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.107 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.107 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.107 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.108 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.108 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.108 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.108 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.108 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.108 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.109 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.109 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.109 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.109 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.109 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.109 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.109 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.110 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.110 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.110 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.110 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.110 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.110 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.110 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.111 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.111 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.111 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.111 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.111 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.111 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.111 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.112 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.112 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.112 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.112 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.112 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.112 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.112 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.113 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.113 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.113 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.113 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.113 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.113 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.113 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.114 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.114 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.114 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.114 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.114 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.114 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.114 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.115 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.115 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.115 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.115 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.115 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.115 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.116 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.116 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.116 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.116 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
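Consoles on this node are VNC-only: rdp.enabled, spice.enabled, and serial_console.enabled are all False above, while vnc.enabled is True. The effective stanza as a nova.conf sketch:

    [vnc]
    enabled = True
    novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
    server_listen = ::0
    server_proxyclient_address = 192.168.122.100
    auth_schemes = none

server_listen = ::0 binds each guest's VNC server on all addresses (IPv6 wildcard), while the noVNC proxy connects back to the hypervisor at 192.168.122.100 and hands browsers the public vnc_lite.html URL.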
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.116 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.116 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.116 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.117 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.117 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.117 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.117 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.117 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.117 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.118 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.118 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.118 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.118 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.118 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.118 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.118 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.119 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.119 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.119 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.119 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.119 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
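Most workarounds.* options sit at their off positions; the live-migration-related ones toggled on here read back as:

    [workarounds]
    # re-announce the guest's MAC via the QEMU monitor after live migration,
    # 3 times at 1-second intervals
    enable_qemu_monitor_announce_self = True
    qemu_monitor_announce_self_count = 3
    qemu_monitor_announce_self_interval = 1
    # skip the libvirt CPU-compatibility pre-check on the migration destination
    skip_cpu_compare_on_dest = True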
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.119 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.119 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.120 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.120 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.120 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.120 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.120 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.120 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.120 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.121 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.121 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.121 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.121 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.121 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.121 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.122 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.122 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.122 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.122 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.122 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.122 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.122 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.123 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.123 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.123 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.123 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.123 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.123 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.123 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.124 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.124 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.124 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.124 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.124 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.124 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.125 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.125 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.125 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.125 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.125 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.125 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.125 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.126 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.126 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.126 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.126 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.126 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.126 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.126 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.127 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.127 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.127 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.127 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.127 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.127 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.127 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.128 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.128 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.128 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.128 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.128 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.128 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.128 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.129 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.129 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.129 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.129 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.129 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.129 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.129 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.130 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.130 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.130 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.130 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.130 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.130 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.130 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.131 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.131 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.131 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.131 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.131 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.131 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.131 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.132 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.132 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.132 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.132 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.132 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.132 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.132 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.132 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.133 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.133 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.133 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.133 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.133 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.133 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.133 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.134 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.134 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.134 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.134 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.134 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.134 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.134 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.135 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.135 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.135 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.135 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.135 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.135 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.135 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.136 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.136 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.136 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.136 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.136 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.136 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.136 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.137 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.137 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.137 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.137 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.137 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.137 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.137 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.138 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.138 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.138 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.138 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.138 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.138 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.138 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.139 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.139 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.139 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.139 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.139 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.139 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.139 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.140 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.140 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.140 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.140 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.140 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.140 247675 DEBUG oslo_service.service [None req-15220da7-97dc-4183-b9b4-27070654cb6d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.142 247675 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.170 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.171 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.171 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.171 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 27 08:49:04 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 27 08:49:04 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.258 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7eff1776cac0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.262 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7eff1776cac0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 27 08:49:04 compute-0 nova_compute[247671]: 2026-01-27 08:49:04.263 247675 INFO nova.virt.libvirt.driver [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Connection event '1' reason 'None'
Jan 27 08:49:04 compute-0 podman[248895]: 2026-01-27 08:49:04.450689057 +0000 UTC m=+0.040240755 container create 92a8fd9478fec83970b8dbce9066d37a2d2fdf45e0dda14e370fd38f498c706f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 08:49:04 compute-0 systemd[1]: Started libpod-conmon-92a8fd9478fec83970b8dbce9066d37a2d2fdf45e0dda14e370fd38f498c706f.scope.
Jan 27 08:49:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:49:04 compute-0 podman[248895]: 2026-01-27 08:49:04.521213371 +0000 UTC m=+0.110765149 container init 92a8fd9478fec83970b8dbce9066d37a2d2fdf45e0dda14e370fd38f498c706f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:49:04 compute-0 podman[248895]: 2026-01-27 08:49:04.528244215 +0000 UTC m=+0.117795913 container start 92a8fd9478fec83970b8dbce9066d37a2d2fdf45e0dda14e370fd38f498c706f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:49:04 compute-0 podman[248895]: 2026-01-27 08:49:04.531825193 +0000 UTC m=+0.121376911 container attach 92a8fd9478fec83970b8dbce9066d37a2d2fdf45e0dda14e370fd38f498c706f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 08:49:04 compute-0 systemd[1]: libpod-92a8fd9478fec83970b8dbce9066d37a2d2fdf45e0dda14e370fd38f498c706f.scope: Deactivated successfully.
Jan 27 08:49:04 compute-0 podman[248895]: 2026-01-27 08:49:04.435989164 +0000 UTC m=+0.025540882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:49:04 compute-0 quirky_chaplygin[248912]: 167 167
Jan 27 08:49:04 compute-0 conmon[248912]: conmon 92a8fd9478fec83970b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92a8fd9478fec83970b8dbce9066d37a2d2fdf45e0dda14e370fd38f498c706f.scope/container/memory.events
Jan 27 08:49:04 compute-0 podman[248895]: 2026-01-27 08:49:04.534463355 +0000 UTC m=+0.124015053 container died 92a8fd9478fec83970b8dbce9066d37a2d2fdf45e0dda14e370fd38f498c706f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 08:49:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0c41232165ee0a0d15efe054062bbf291e7854659de4b2787c462fb2ec5a0e6-merged.mount: Deactivated successfully.
Jan 27 08:49:04 compute-0 podman[248895]: 2026-01-27 08:49:04.567992345 +0000 UTC m=+0.157544033 container remove 92a8fd9478fec83970b8dbce9066d37a2d2fdf45e0dda14e370fd38f498c706f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:49:04 compute-0 systemd[1]: libpod-conmon-92a8fd9478fec83970b8dbce9066d37a2d2fdf45e0dda14e370fd38f498c706f.scope: Deactivated successfully.
Jan 27 08:49:04 compute-0 ceph-mon[74357]: pgmap v748: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:04 compute-0 podman[248935]: 2026-01-27 08:49:04.709545449 +0000 UTC m=+0.038698743 container create a9d7572be505cf978656eda8517d98343d93da4a82da7ccb877187e77d1b591b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_newton, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:49:04 compute-0 systemd[1]: Started libpod-conmon-a9d7572be505cf978656eda8517d98343d93da4a82da7ccb877187e77d1b591b.scope.
Jan 27 08:49:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326b91a4ebd0c4592ecace1023ebefeb200168d03a8c66af2d75fe9ba8df3a5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326b91a4ebd0c4592ecace1023ebefeb200168d03a8c66af2d75fe9ba8df3a5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326b91a4ebd0c4592ecace1023ebefeb200168d03a8c66af2d75fe9ba8df3a5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326b91a4ebd0c4592ecace1023ebefeb200168d03a8c66af2d75fe9ba8df3a5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:04 compute-0 podman[248935]: 2026-01-27 08:49:04.693755625 +0000 UTC m=+0.022908929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:49:04 compute-0 podman[248935]: 2026-01-27 08:49:04.796741681 +0000 UTC m=+0.125895055 container init a9d7572be505cf978656eda8517d98343d93da4a82da7ccb877187e77d1b591b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:49:04 compute-0 podman[248935]: 2026-01-27 08:49:04.802378886 +0000 UTC m=+0.131532170 container start a9d7572be505cf978656eda8517d98343d93da4a82da7ccb877187e77d1b591b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_newton, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:49:04 compute-0 podman[248935]: 2026-01-27 08:49:04.805723707 +0000 UTC m=+0.134877011 container attach a9d7572be505cf978656eda8517d98343d93da4a82da7ccb877187e77d1b591b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_newton, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.239 247675 WARNING nova.virt.libvirt.driver [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.240 247675 DEBUG nova.virt.libvirt.volume.mount [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.254 247675 INFO nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Libvirt host capabilities <capabilities>
Jan 27 08:49:05 compute-0 nova_compute[247671]: 
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <host>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <uuid>da3c9646-1d5e-49f1-b628-025d3ab5e115</uuid>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <cpu>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <arch>x86_64</arch>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model>EPYC-Rome-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <vendor>AMD</vendor>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <microcode version='16777317'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <signature family='23' model='49' stepping='0'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='x2apic'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='tsc-deadline'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='osxsave'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='hypervisor'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='tsc_adjust'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='spec-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='stibp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='arch-capabilities'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='cmp_legacy'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='topoext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='virt-ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='lbrv'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='tsc-scale'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='vmcb-clean'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='pause-filter'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='pfthreshold'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='svme-addr-chk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='rdctl-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='skip-l1dfl-vmentry'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='mds-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature name='pschange-mc-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <pages unit='KiB' size='4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <pages unit='KiB' size='2048'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <pages unit='KiB' size='1048576'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </cpu>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <power_management>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <suspend_mem/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </power_management>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <iommu support='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <migration_features>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <live/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <uri_transports>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <uri_transport>tcp</uri_transport>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <uri_transport>rdma</uri_transport>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </uri_transports>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </migration_features>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <topology>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <cells num='1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <cell id='0'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:           <memory unit='KiB'>7864316</memory>
Jan 27 08:49:05 compute-0 nova_compute[247671]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 27 08:49:05 compute-0 nova_compute[247671]:           <pages unit='KiB' size='2048'>0</pages>
Jan 27 08:49:05 compute-0 nova_compute[247671]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 27 08:49:05 compute-0 nova_compute[247671]:           <distances>
Jan 27 08:49:05 compute-0 nova_compute[247671]:             <sibling id='0' value='10'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:           </distances>
Jan 27 08:49:05 compute-0 nova_compute[247671]:           <cpus num='8'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:           </cpus>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         </cell>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </cells>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </topology>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <cache>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </cache>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <secmodel>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model>selinux</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <doi>0</doi>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </secmodel>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <secmodel>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model>dac</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <doi>0</doi>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </secmodel>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </host>
Jan 27 08:49:05 compute-0 nova_compute[247671]: 
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <guest>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <os_type>hvm</os_type>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <arch name='i686'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <wordsize>32</wordsize>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <domain type='qemu'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <domain type='kvm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </arch>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <features>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <pae/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <nonpae/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <acpi default='on' toggle='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <apic default='on' toggle='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <cpuselection/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <deviceboot/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <disksnapshot default='on' toggle='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <externalSnapshot/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </features>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </guest>
Jan 27 08:49:05 compute-0 nova_compute[247671]: 
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <guest>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <os_type>hvm</os_type>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <arch name='x86_64'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <wordsize>64</wordsize>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <domain type='qemu'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <domain type='kvm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </arch>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <features>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <acpi default='on' toggle='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <apic default='on' toggle='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <cpuselection/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <deviceboot/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <disksnapshot default='on' toggle='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <externalSnapshot/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </features>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </guest>
Jan 27 08:49:05 compute-0 nova_compute[247671]: 
Jan 27 08:49:05 compute-0 nova_compute[247671]: </capabilities>
Jan 27 08:49:05 compute-0 nova_compute[247671]: 
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.261 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.296 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 27 08:49:05 compute-0 nova_compute[247671]: <domainCapabilities>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <domain>kvm</domain>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <arch>i686</arch>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <vcpu max='4096'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <iothreads supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <os supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <enum name='firmware'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <loader supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>rom</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pflash</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='readonly'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>yes</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>no</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='secure'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>no</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </loader>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </os>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <cpu>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='host-passthrough' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='hostPassthroughMigratable'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>on</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>off</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='maximum' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='maximumMigratable'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>on</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>off</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='host-model' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <vendor>AMD</vendor>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='x2apic'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='hypervisor'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='stibp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='overflow-recov'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='succor'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='lbrv'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='tsc-scale'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='flushbyasid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='pause-filter'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='pfthreshold'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='disable' name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='custom' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='ClearwaterForest'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ddpd-u'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sha512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm3'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='ClearwaterForest-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ddpd-u'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sha512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm3'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cooperlake'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cooperlake-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cooperlake-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Dhyana-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Genoa'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fs-gs-base-ns'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='perfmon-v2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Turin'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vp2intersect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fs-gs-base-ns'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibpb-brtype'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='perfmon-v2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbpb'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='srso-user-kernel-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Turin-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vp2intersect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fs-gs-base-ns'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibpb-brtype'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='perfmon-v2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbpb'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='srso-user-kernel-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-128'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-256'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-128'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-256'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v6'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v7'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='KnightsMill'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4fmaps'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4vnniw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512er'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512pf'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='KnightsMill-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4fmaps'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4vnniw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512er'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512pf'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G4-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tbm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G5-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tbm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='athlon'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='athlon-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='core2duo'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='core2duo-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='coreduo'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='coreduo-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='n270'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='n270-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='phenom'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='phenom-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </cpu>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <memoryBacking supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <enum name='sourceType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>file</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>anonymous</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>memfd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </memoryBacking>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <devices>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <disk supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='diskDevice'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>disk</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>cdrom</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>floppy</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>lun</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='bus'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>fdc</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>scsi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>usb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>sata</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-non-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </disk>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <graphics supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vnc</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>egl-headless</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>dbus</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </graphics>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <video supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='modelType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vga</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>cirrus</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>none</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>bochs</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>ramfb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </video>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <hostdev supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='mode'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>subsystem</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='startupPolicy'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>default</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>mandatory</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>requisite</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>optional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='subsysType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>usb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pci</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>scsi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='capsType'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='pciBackend'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </hostdev>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <rng supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-non-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendModel'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>random</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>egd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>builtin</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </rng>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <filesystem supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='driverType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>path</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>handle</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtiofs</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </filesystem>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <tpm supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tpm-tis</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tpm-crb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendModel'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>emulator</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>external</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendVersion'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>2.0</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </tpm>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <redirdev supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='bus'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>usb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </redirdev>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <channel supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pty</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>unix</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </channel>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <crypto supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>qemu</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendModel'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>builtin</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </crypto>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <interface supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>default</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>passt</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </interface>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <panic supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>isa</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>hyperv</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </panic>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <console supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>null</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vc</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pty</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>dev</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>file</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pipe</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>stdio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>udp</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tcp</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>unix</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>qemu-vdagent</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>dbus</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </console>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </devices>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <features>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <gic supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <vmcoreinfo supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <genid supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <backingStoreInput supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <backup supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <async-teardown supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <s390-pv supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <ps2 supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <tdx supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <sev supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <sgx supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <hyperv supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='features'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>relaxed</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vapic</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>spinlocks</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vpindex</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>runtime</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>synic</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>stimer</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>reset</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vendor_id</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>frequencies</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>reenlightenment</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tlbflush</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>ipi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>avic</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>emsr_bitmap</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>xmm_input</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <defaults>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <spinlocks>4095</spinlocks>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <stimer_direct>on</stimer_direct>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </defaults>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </hyperv>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <launchSecurity supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </features>
Jan 27 08:49:05 compute-0 nova_compute[247671]: </domainCapabilities>
Jan 27 08:49:05 compute-0 nova_compute[247671]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
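[editor's note] The domainCapabilities document logged above can be reproduced outside of nova for debugging, e.g. with `virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --virttype kvm --arch x86_64` or via the libvirt Python binding. Below is a minimal sketch (an illustration, not nova's actual code path) of the same query that the _get_domain_capabilities call at host.py:1037 issues, followed by a parse of the custom-mode CPU model list; the qemu:///system URI and the i686/pc pair from the next log record are assumptions for the example.

    import libvirt                      # libvirt-python binding
    import xml.etree.ElementTree as ET

    # Read-only connection to the local QEMU/KVM hypervisor
    # (URI assumed; adjust for a remote host).
    conn = libvirt.openReadOnly('qemu:///system')

    # Same kind of query nova logs per (arch, machine type) pair;
    # emulator path and parameters taken from the log records here.
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator binary
        'i686',                   # architecture
        'pc',                     # machine type
        'kvm',                    # virt type
        0)

    root = ET.fromstring(caps_xml)

    # Print each custom-mode CPU model with its usable flag,
    # mirroring the usable='yes'/'no' attributes in the dump above.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        print(model.text, model.get('usable'))

    conn.close()

Models reported usable='no' carry a <blockers> element listing the host-missing features (as seen throughout the dump that follows), which is how nova decides whether a requested cpu_model can be scheduled on this host.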
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.309 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 27 08:49:05 compute-0 nova_compute[247671]: <domainCapabilities>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <domain>kvm</domain>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <arch>i686</arch>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <vcpu max='240'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <iothreads supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <os supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <enum name='firmware'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <loader supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>rom</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pflash</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='readonly'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>yes</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>no</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='secure'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>no</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </loader>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </os>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <cpu>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='host-passthrough' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='hostPassthroughMigratable'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>on</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>off</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='maximum' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='maximumMigratable'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>on</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>off</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='host-model' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <vendor>AMD</vendor>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='x2apic'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='hypervisor'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='stibp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='overflow-recov'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='succor'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='lbrv'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='tsc-scale'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='flushbyasid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='pause-filter'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='pfthreshold'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='disable' name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='custom' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='ClearwaterForest'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ddpd-u'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sha512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm3'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='ClearwaterForest-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ddpd-u'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sha512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm3'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cooperlake'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cooperlake-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cooperlake-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Dhyana-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Genoa'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fs-gs-base-ns'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='perfmon-v2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Turin'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vp2intersect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fs-gs-base-ns'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibpb-brtype'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='perfmon-v2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbpb'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='srso-user-kernel-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Turin-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vp2intersect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fs-gs-base-ns'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibpb-brtype'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='perfmon-v2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbpb'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='srso-user-kernel-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-128'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-256'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-128'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-256'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v6'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v7'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='KnightsMill'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4fmaps'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4vnniw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512er'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512pf'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='KnightsMill-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4fmaps'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4vnniw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512er'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512pf'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G4-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tbm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G5-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tbm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='athlon'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='athlon-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='core2duo'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='core2duo-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='coreduo'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='coreduo-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='n270'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='n270-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='phenom'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='phenom-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </cpu>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <memoryBacking supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <enum name='sourceType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>file</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>anonymous</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>memfd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </memoryBacking>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <devices>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <disk supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='diskDevice'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>disk</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>cdrom</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>floppy</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>lun</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='bus'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>ide</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>fdc</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>scsi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>usb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>sata</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-non-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </disk>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <graphics supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vnc</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>egl-headless</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>dbus</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </graphics>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <video supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='modelType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vga</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>cirrus</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>none</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>bochs</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>ramfb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </video>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <hostdev supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='mode'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>subsystem</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='startupPolicy'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>default</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>mandatory</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>requisite</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>optional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='subsysType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>usb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pci</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>scsi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='capsType'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='pciBackend'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </hostdev>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <rng supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-non-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendModel'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>random</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>egd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>builtin</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </rng>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <filesystem supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='driverType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>path</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>handle</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtiofs</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </filesystem>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <tpm supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tpm-tis</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tpm-crb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendModel'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>emulator</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>external</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendVersion'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>2.0</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </tpm>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <redirdev supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='bus'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>usb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </redirdev>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <channel supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pty</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>unix</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </channel>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <crypto supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>qemu</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendModel'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>builtin</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </crypto>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <interface supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>default</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>passt</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </interface>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <panic supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>isa</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>hyperv</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </panic>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <console supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>null</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vc</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pty</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>dev</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>file</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pipe</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>stdio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>udp</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tcp</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>unix</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>qemu-vdagent</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>dbus</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </console>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </devices>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <features>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <gic supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <vmcoreinfo supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <genid supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <backingStoreInput supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <backup supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <async-teardown supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <s390-pv supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <ps2 supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <tdx supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <sev supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <sgx supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <hyperv supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='features'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>relaxed</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vapic</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>spinlocks</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vpindex</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>runtime</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>synic</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>stimer</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>reset</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vendor_id</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>frequencies</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>reenlightenment</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tlbflush</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>ipi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>avic</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>emsr_bitmap</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>xmm_input</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <defaults>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <spinlocks>4095</spinlocks>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <stimer_direct>on</stimer_direct>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </defaults>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </hyperv>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <launchSecurity supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </features>
Jan 27 08:49:05 compute-0 nova_compute[247671]: </domainCapabilities>
Jan 27 08:49:05 compute-0 nova_compute[247671]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
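For reference, a domainCapabilities document like the one logged above can be fetched directly with the libvirt Python bindings, the same API surface that nova's host.py wraps in _get_domain_capabilities. A minimal sketch, assuming libvirt-python is installed and a qemu:///system URI is reachable; the 'q35' machine type below is an illustrative choice, not taken from nova's code:

    # Fetch the libvirt domain capabilities XML for a given
    # emulator binary, architecture, machine type, and virt type.
    import libvirt

    conn = libvirt.open('qemu:///system')
    # Arguments mirror the logged record; passing None for any of the
    # first four lets libvirt pick its default.
    xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator path, as in <path> above
        'x86_64',                 # arch
        'q35',                    # machine type (illustrative)
        'kvm',                    # virt type, as in <domain> above
        0)                        # flags
    print(xml)  # prints a <domainCapabilities>...</domainCapabilities> document
    conn.close()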
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.356 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.363 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 27 08:49:05 compute-0 nova_compute[247671]: <domainCapabilities>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <domain>kvm</domain>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <arch>x86_64</arch>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <vcpu max='240'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <iothreads supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <os supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <enum name='firmware'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <loader supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>rom</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pflash</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='readonly'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>yes</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>no</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='secure'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>no</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </loader>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </os>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <cpu>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='host-passthrough' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='hostPassthroughMigratable'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>on</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>off</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='maximum' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='maximumMigratable'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>on</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>off</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='host-model' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <vendor>AMD</vendor>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='x2apic'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='hypervisor'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='stibp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='overflow-recov'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='succor'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='lbrv'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='tsc-scale'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='flushbyasid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='pause-filter'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='pfthreshold'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='disable' name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='custom' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='ClearwaterForest'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ddpd-u'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sha512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm3'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='ClearwaterForest-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ddpd-u'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sha512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm3'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cooperlake'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cooperlake-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cooperlake-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Dhyana-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Genoa'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fs-gs-base-ns'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='perfmon-v2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Turin'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vp2intersect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fs-gs-base-ns'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibpb-brtype'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='perfmon-v2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbpb'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='srso-user-kernel-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Turin-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vp2intersect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fs-gs-base-ns'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibpb-brtype'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='perfmon-v2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbpb'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='srso-user-kernel-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-128'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-256'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-128'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-256'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v6'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v7'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='KnightsMill'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4fmaps'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4vnniw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512er'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512pf'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='KnightsMill-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4fmaps'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4vnniw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512er'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512pf'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G4-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tbm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G5-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tbm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='athlon'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='athlon-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='core2duo'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='core2duo-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='coreduo'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='coreduo-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='n270'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='n270-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='phenom'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='phenom-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </cpu>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <memoryBacking supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <enum name='sourceType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>file</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>anonymous</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>memfd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </memoryBacking>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <devices>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <disk supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='diskDevice'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>disk</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>cdrom</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>floppy</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>lun</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='bus'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>ide</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>fdc</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>scsi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>usb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>sata</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-non-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </disk>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <graphics supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vnc</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>egl-headless</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>dbus</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </graphics>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <video supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='modelType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vga</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>cirrus</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>none</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>bochs</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>ramfb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </video>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <hostdev supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='mode'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>subsystem</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='startupPolicy'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>default</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>mandatory</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>requisite</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>optional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='subsysType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>usb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pci</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>scsi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='capsType'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='pciBackend'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </hostdev>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <rng supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-non-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendModel'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>random</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>egd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>builtin</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </rng>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <filesystem supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='driverType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>path</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>handle</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtiofs</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </filesystem>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <tpm supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tpm-tis</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tpm-crb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendModel'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>emulator</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>external</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendVersion'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>2.0</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </tpm>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <redirdev supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='bus'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>usb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </redirdev>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <channel supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pty</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>unix</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </channel>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <crypto supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>qemu</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendModel'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>builtin</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </crypto>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <interface supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>default</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>passt</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </interface>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <panic supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>isa</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>hyperv</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </panic>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <console supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>null</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vc</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pty</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>dev</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>file</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pipe</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>stdio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>udp</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tcp</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>unix</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>qemu-vdagent</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>dbus</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </console>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </devices>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <features>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <gic supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <vmcoreinfo supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <genid supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <backingStoreInput supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <backup supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <async-teardown supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <s390-pv supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <ps2 supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <tdx supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <sev supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <sgx supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <hyperv supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='features'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>relaxed</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vapic</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>spinlocks</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vpindex</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>runtime</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>synic</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>stimer</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>reset</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vendor_id</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>frequencies</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>reenlightenment</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tlbflush</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>ipi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>avic</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>emsr_bitmap</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>xmm_input</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <defaults>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <spinlocks>4095</spinlocks>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <stimer_direct>on</stimer_direct>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </defaults>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </hyperv>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <launchSecurity supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </features>
Jan 27 08:49:05 compute-0 nova_compute[247671]: </domainCapabilities>
Jan 27 08:49:05 compute-0 nova_compute[247671]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.441 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 27 08:49:05 compute-0 nova_compute[247671]: <domainCapabilities>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <domain>kvm</domain>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <arch>x86_64</arch>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <vcpu max='4096'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <iothreads supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <os supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <enum name='firmware'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>efi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <loader supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>rom</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pflash</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='readonly'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>yes</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>no</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='secure'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>yes</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>no</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </loader>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </os>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <cpu>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='host-passthrough' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='hostPassthroughMigratable'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>on</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>off</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='maximum' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='maximumMigratable'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>on</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>off</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='host-model' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <vendor>AMD</vendor>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='x2apic'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='hypervisor'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='stibp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='overflow-recov'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='succor'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='lbrv'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='tsc-scale'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='flushbyasid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='pause-filter'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='pfthreshold'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <feature policy='disable' name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <mode name='custom' supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Broadwell-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='ClearwaterForest'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ddpd-u'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sha512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm3'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='ClearwaterForest-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ddpd-u'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sha512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm3'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sm4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cooperlake'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cooperlake-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Cooperlake-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Denverton-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Dhyana-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Genoa'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fs-gs-base-ns'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='perfmon-v2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Milan-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Rome-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Turin'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vp2intersect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fs-gs-base-ns'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibpb-brtype'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='perfmon-v2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbpb'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='srso-user-kernel-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-Turin-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amd-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='auto-ibrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vp2intersect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fs-gs-base-ns'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibpb-brtype'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='no-nested-data-bp'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='null-sel-clr-base'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='perfmon-v2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbpb'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='srso-user-kernel-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='stibp-always-on'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='EPYC-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-128'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-256'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='GraniteRapids-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-128'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-256'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx10-512'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='prefetchiti'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Haswell-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v6'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Icelake-Server-v7'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='IvyBridge-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='KnightsMill'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4fmaps'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4vnniw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512er'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512pf'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='KnightsMill-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4fmaps'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-4vnniw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512er'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512pf'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G4-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tbm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Opteron_G5-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fma4'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tbm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xop'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SapphireRapids-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='amx-tile'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-bf16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-fp16'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512-vpopcntdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bitalg'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vbmi2'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrc'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fzrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='la57'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='taa-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='tsx-ldtrk'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='SierraForest-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ifma'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-ne-convert'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx-vnni-int8'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bhi-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='bus-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cmpccxadd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fbsdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='fsrs'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ibrs-all'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='intel-psfd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ipred-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='lam'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mcdt-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pbrsb-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='psdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rrsba-ctrl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='sbdr-ssdp-no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='serialize'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vaes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='vpclmulqdq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Client-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='hle'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='rtm'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Skylake-Server-v5'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512bw'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512cd'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512dq'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512f'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='avx512vl'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='invpcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pcid'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='pku'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='mpx'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v2'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v3'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='core-capability'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='split-lock-detect'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='Snowridge-v4'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='cldemote'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='erms'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='gfni'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdir64b'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='movdiri'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='xsaves'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='athlon'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='athlon-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='core2duo'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='core2duo-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='coreduo'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='coreduo-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='n270'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='n270-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='ss'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='phenom'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <blockers model='phenom-v1'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnow'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <feature name='3dnowext'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </blockers>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </mode>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </cpu>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <memoryBacking supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <enum name='sourceType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>file</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>anonymous</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <value>memfd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </memoryBacking>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <devices>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <disk supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='diskDevice'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>disk</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>cdrom</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>floppy</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>lun</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='bus'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>fdc</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>scsi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>usb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>sata</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-non-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </disk>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <graphics supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vnc</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>egl-headless</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>dbus</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </graphics>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <video supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='modelType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vga</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>cirrus</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>none</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>bochs</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>ramfb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </video>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <hostdev supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='mode'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>subsystem</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='startupPolicy'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>default</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>mandatory</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>requisite</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>optional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='subsysType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>usb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pci</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>scsi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='capsType'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='pciBackend'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </hostdev>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <rng supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtio-non-transitional</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendModel'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>random</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>egd</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>builtin</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </rng>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <filesystem supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='driverType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>path</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>handle</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>virtiofs</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </filesystem>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <tpm supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tpm-tis</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tpm-crb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendModel'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>emulator</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>external</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendVersion'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>2.0</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </tpm>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <redirdev supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='bus'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>usb</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </redirdev>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <channel supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pty</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>unix</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </channel>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <crypto supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>qemu</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendModel'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>builtin</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </crypto>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <interface supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='backendType'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>default</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>passt</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </interface>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <panic supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='model'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>isa</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>hyperv</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </panic>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <console supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='type'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>null</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vc</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pty</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>dev</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>file</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>pipe</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>stdio</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>udp</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tcp</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>unix</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>qemu-vdagent</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>dbus</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </console>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </devices>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <features>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <gic supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <vmcoreinfo supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <genid supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <backingStoreInput supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <backup supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <async-teardown supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <s390-pv supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <ps2 supported='yes'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <tdx supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <sev supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <sgx supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <hyperv supported='yes'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <enum name='features'>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>relaxed</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vapic</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>spinlocks</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vpindex</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>runtime</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>synic</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>stimer</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>reset</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>vendor_id</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>frequencies</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>reenlightenment</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>tlbflush</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>ipi</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>avic</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>emsr_bitmap</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <value>xmm_input</value>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </enum>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       <defaults>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <spinlocks>4095</spinlocks>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <stimer_direct>on</stimer_direct>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 08:49:05 compute-0 nova_compute[247671]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 08:49:05 compute-0 nova_compute[247671]:       </defaults>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     </hyperv>
Jan 27 08:49:05 compute-0 nova_compute[247671]:     <launchSecurity supported='no'/>
Jan 27 08:49:05 compute-0 nova_compute[247671]:   </features>
Jan 27 08:49:05 compute-0 nova_compute[247671]: </domainCapabilities>
Jan 27 08:49:05 compute-0 nova_compute[247671]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.524 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.525 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.525 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.535 247675 INFO nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Secure Boot support detected
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.537 247675 INFO nova.virt.libvirt.driver [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.537 247675 INFO nova.virt.libvirt.driver [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.552 247675 DEBUG nova.virt.libvirt.driver [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 27 08:49:05 compute-0 nova_compute[247671]:   <model>Nehalem</model>
Jan 27 08:49:05 compute-0 nova_compute[247671]: </cpu>
Jan 27 08:49:05 compute-0 nova_compute[247671]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.554 247675 DEBUG nova.virt.libvirt.driver [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.630 247675 INFO nova.virt.node [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Determined node identity 083cbb1c-f2d4-4883-a91d-8697c4453517 from /var/lib/nova/compute_id
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.664 247675 WARNING nova.compute.manager [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Compute nodes ['083cbb1c-f2d4-4883-a91d-8697c4453517'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 27 08:49:05 compute-0 agitated_newton[248951]: {
Jan 27 08:49:05 compute-0 agitated_newton[248951]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:49:05 compute-0 agitated_newton[248951]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:49:05 compute-0 agitated_newton[248951]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:49:05 compute-0 agitated_newton[248951]:         "osd_id": 0,
Jan 27 08:49:05 compute-0 agitated_newton[248951]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:49:05 compute-0 agitated_newton[248951]:         "type": "bluestore"
Jan 27 08:49:05 compute-0 agitated_newton[248951]:     }
Jan 27 08:49:05 compute-0 agitated_newton[248951]: }
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.737 247675 INFO nova.compute.manager [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 27 08:49:05 compute-0 systemd[1]: libpod-a9d7572be505cf978656eda8517d98343d93da4a82da7ccb877187e77d1b591b.scope: Deactivated successfully.
Jan 27 08:49:05 compute-0 podman[248935]: 2026-01-27 08:49:05.740964885 +0000 UTC m=+1.070118179 container died a9d7572be505cf978656eda8517d98343d93da4a82da7ccb877187e77d1b591b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_newton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:49:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-326b91a4ebd0c4592ecace1023ebefeb200168d03a8c66af2d75fe9ba8df3a5c-merged.mount: Deactivated successfully.
Jan 27 08:49:05 compute-0 podman[248935]: 2026-01-27 08:49:05.785554788 +0000 UTC m=+1.114708072 container remove a9d7572be505cf978656eda8517d98343d93da4a82da7ccb877187e77d1b591b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:49:05 compute-0 systemd[1]: libpod-conmon-a9d7572be505cf978656eda8517d98343d93da4a82da7ccb877187e77d1b591b.scope: Deactivated successfully.
Jan 27 08:49:05 compute-0 sudo[248777]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:49:05 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:49:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:05.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:49:05 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:49:05 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 8e8aad95-97d6-4f12-978f-68cb1e17bb60 does not exist
Jan 27 08:49:05 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 7b7038a3-e438-455e-b3a6-0ed2721cf5b9 does not exist
Jan 27 08:49:05 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 913dae68-98d6-4b2e-9ac4-8acde5e3b1f8 does not exist
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.859 247675 WARNING nova.compute.manager [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.859 247675 DEBUG oslo_concurrency.lockutils [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.860 247675 DEBUG oslo_concurrency.lockutils [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.860 247675 DEBUG oslo_concurrency.lockutils [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.860 247675 DEBUG nova.compute.resource_tracker [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:49:05 compute-0 nova_compute[247671]: 2026-01-27 08:49:05.861 247675 DEBUG oslo_concurrency.processutils [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:49:05 compute-0 sudo[248996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:49:05 compute-0 sudo[248996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:05 compute-0 sudo[248996]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:05 compute-0 sudo[249022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:49:05 compute-0 sudo[249022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:05 compute-0 sudo[249022]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:06.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:49:06 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1180553086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:06 compute-0 nova_compute[247671]: 2026-01-27 08:49:06.293 247675 DEBUG oslo_concurrency.processutils [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:49:06 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 27 08:49:06 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 27 08:49:06 compute-0 podman[249068]: 2026-01-27 08:49:06.420582705 +0000 UTC m=+0.067385035 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:49:06 compute-0 nova_compute[247671]: 2026-01-27 08:49:06.655 247675 WARNING nova.virt.libvirt.driver [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:49:06 compute-0 nova_compute[247671]: 2026-01-27 08:49:06.657 247675 DEBUG nova.compute.resource_tracker [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5196MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:49:06 compute-0 nova_compute[247671]: 2026-01-27 08:49:06.657 247675 DEBUG oslo_concurrency.lockutils [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:49:06 compute-0 nova_compute[247671]: 2026-01-27 08:49:06.657 247675 DEBUG oslo_concurrency.lockutils [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:49:06 compute-0 nova_compute[247671]: 2026-01-27 08:49:06.691 247675 WARNING nova.compute.resource_tracker [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] No compute node record for compute-0.ctlplane.example.com:083cbb1c-f2d4-4883-a91d-8697c4453517: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 083cbb1c-f2d4-4883-a91d-8697c4453517 could not be found.
Jan 27 08:49:06 compute-0 nova_compute[247671]: 2026-01-27 08:49:06.759 247675 INFO nova.compute.resource_tracker [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 083cbb1c-f2d4-4883-a91d-8697c4453517
Jan 27 08:49:06 compute-0 ceph-mon[74357]: pgmap v749: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:06 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:49:06 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:49:06 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1180553086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:06 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3016170822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:06 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2708347631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:06 compute-0 nova_compute[247671]: 2026-01-27 08:49:06.879 247675 DEBUG nova.compute.resource_tracker [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:49:06 compute-0 nova_compute[247671]: 2026-01-27 08:49:06.879 247675 DEBUG nova.compute.resource_tracker [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:49:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:07 compute-0 nova_compute[247671]: 2026-01-27 08:49:07.433 247675 INFO nova.scheduler.client.report [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] [req-e4ccc8a5-b729-4ed9-8a48-326d5a02ac54] Created resource provider record via placement API for resource provider with UUID 083cbb1c-f2d4-4883-a91d-8697c4453517 and name compute-0.ctlplane.example.com.
Jan 27 08:49:07 compute-0 nova_compute[247671]: 2026-01-27 08:49:07.464 247675 DEBUG oslo_concurrency.processutils [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:49:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:49:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:07.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:49:07 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355395210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:07 compute-0 nova_compute[247671]: 2026-01-27 08:49:07.873 247675 DEBUG oslo_concurrency.processutils [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:49:07 compute-0 nova_compute[247671]: 2026-01-27 08:49:07.881 247675 DEBUG nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 27 08:49:07 compute-0 nova_compute[247671]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 27 08:49:07 compute-0 nova_compute[247671]: 2026-01-27 08:49:07.881 247675 INFO nova.virt.libvirt.host [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] kernel doesn't support AMD SEV
Jan 27 08:49:07 compute-0 nova_compute[247671]: 2026-01-27 08:49:07.883 247675 DEBUG nova.compute.provider_tree [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Updating inventory in ProviderTree for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 08:49:07 compute-0 nova_compute[247671]: 2026-01-27 08:49:07.884 247675 DEBUG nova.virt.libvirt.driver [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 08:49:07 compute-0 nova_compute[247671]: 2026-01-27 08:49:07.888 247675 DEBUG nova.virt.libvirt.driver [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Libvirt baseline CPU <cpu>
Jan 27 08:49:07 compute-0 nova_compute[247671]:   <arch>x86_64</arch>
Jan 27 08:49:07 compute-0 nova_compute[247671]:   <model>Nehalem</model>
Jan 27 08:49:07 compute-0 nova_compute[247671]:   <vendor>AMD</vendor>
Jan 27 08:49:07 compute-0 nova_compute[247671]:   <topology sockets="8" cores="1" threads="1"/>
Jan 27 08:49:07 compute-0 nova_compute[247671]: </cpu>
Jan 27 08:49:07 compute-0 nova_compute[247671]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Jan 27 08:49:07 compute-0 nova_compute[247671]: 2026-01-27 08:49:07.992 247675 DEBUG nova.scheduler.client.report [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Updated inventory for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 27 08:49:07 compute-0 nova_compute[247671]: 2026-01-27 08:49:07.992 247675 DEBUG nova.compute.provider_tree [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Updating resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 27 08:49:07 compute-0 nova_compute[247671]: 2026-01-27 08:49:07.992 247675 DEBUG nova.compute.provider_tree [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Updating inventory in ProviderTree for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 08:49:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:08.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:08 compute-0 nova_compute[247671]: 2026-01-27 08:49:08.160 247675 DEBUG nova.compute.provider_tree [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Updating resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 27 08:49:08 compute-0 nova_compute[247671]: 2026-01-27 08:49:08.191 247675 DEBUG nova.compute.resource_tracker [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:49:08 compute-0 nova_compute[247671]: 2026-01-27 08:49:08.191 247675 DEBUG oslo_concurrency.lockutils [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:49:08 compute-0 nova_compute[247671]: 2026-01-27 08:49:08.192 247675 DEBUG nova.service [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 27 08:49:08 compute-0 nova_compute[247671]: 2026-01-27 08:49:08.340 247675 DEBUG nova.service [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 27 08:49:08 compute-0 nova_compute[247671]: 2026-01-27 08:49:08.340 247675 DEBUG nova.servicegroup.drivers.db [None req-942b4d11-873e-4091-869c-013325b8bbd6 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 27 08:49:08 compute-0 ceph-mon[74357]: pgmap v750: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:08 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2355395210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:08 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/634384332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:08 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2690418698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:09.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:10.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:10 compute-0 ceph-mon[74357]: pgmap v751: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:11.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:12.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:49:12 compute-0 ceph-mon[74357]: pgmap v752: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:13.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:14.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:14 compute-0 ceph-mon[74357]: pgmap v753: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:49:14
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:49:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:15.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:16.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:16 compute-0 ceph-mon[74357]: pgmap v754: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:49:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:17.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:18.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:18 compute-0 ceph-mon[74357]: pgmap v755: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:19.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:20.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:21 compute-0 ceph-mon[74357]: pgmap v756: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:21 compute-0 sudo[249140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:49:21 compute-0 sudo[249140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:21 compute-0 sudo[249140]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:21 compute-0 sudo[249165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:49:21 compute-0 sudo[249165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:21 compute-0 sudo[249165]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:21.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:22.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:22 compute-0 podman[249190]: 2026-01-27 08:49:22.332211573 +0000 UTC m=+0.133577255 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 27 08:49:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:49:23 compute-0 ceph-mon[74357]: pgmap v757: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:23.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:24.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:49:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:49:25 compute-0 ceph-mon[74357]: pgmap v758: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:25.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:26.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:27 compute-0 ceph-mon[74357]: pgmap v759: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:49:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:27.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:28.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:29 compute-0 ceph-mon[74357]: pgmap v760: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:29.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:30.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:49:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 6333 writes, 25K keys, 6333 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6333 writes, 1165 syncs, 5.44 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 438 writes, 721 keys, 438 commit groups, 1.0 writes per commit group, ingest: 0.23 MB, 0.00 MB/s
                                           Interval WAL: 438 writes, 194 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558693053610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
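The DUMPING STATS block that ends here is emitted periodically by the OSD's embedded RocksDB (600 s interval, per the Uptime lines). A hedged sketch for extracting a few headline numbers from such a dump; the patterns are tailored to this dump's layout and may not cover other RocksDB versions:

```python
import re

def rocksdb_headline(dump: str) -> dict:
    # Pulls write count, ingest volume, WAL syncs, and stall percentage
    # out of a "DUMPING STATS" text block like the one logged above.
    stats = {}
    m = re.search(r'Cumulative writes: (\d+) writes, .*?ingest: ([\d.]+) GB', dump)
    if m:
        stats['writes'] = int(m.group(1))
        stats['ingest_gb'] = float(m.group(2))
    m = re.search(r'Cumulative WAL: \d+ writes, (\d+) syncs', dump)
    if m:
        stats['wal_syncs'] = int(m.group(1))
    m = re.search(r'Cumulative stall: [\d:.]+ H:M:S, ([\d.]+) percent', dump)
    if m:
        stats['stall_pct'] = float(m.group(1))
    return stats

dump = """Cumulative writes: 6333 writes, 25K keys, 6333 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 6333 writes, 1165 syncs, 5.44 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent"""
print(rocksdb_headline(dump))
# {'writes': 6333, 'ingest_gb': 0.02, 'wal_syncs': 1165, 'stall_pct': 0.0}
```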
Jan 27 08:49:31 compute-0 ceph-mon[74357]: pgmap v761: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:31.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:32.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:32 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1015772572' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:49:32 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1015772572' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
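The two dispatches above are the monitor-side view of a client polling pool usage: a df and an osd pool get-quota against the volumes pool, consistent with a capacity poll by the OpenStack client. A hedged reconstruction of the client side using the ceph CLI's JSON output (the key names are assumed from the usual ceph df / get-quota JSON; this is not the actual OpenStack code path):

```python
import json
import subprocess

def ceph_json(*args: str) -> dict:
    # Shells out to the ceph CLI and decodes its JSON output.
    out = subprocess.run(
        ["ceph", *args, "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

df = ceph_json("df")                                       # {"prefix":"df"} dispatch
quota = ceph_json("osd", "pool", "get-quota", "volumes")   # get-quota dispatch
print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))
```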
Jan 27 08:49:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:49:33 compute-0 ceph-mon[74357]: pgmap v762: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:33 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3054827984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:49:33 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3054827984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:49:33 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/623553888' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:49:33 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/623553888' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:49:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:49:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:33.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:49:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:34.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:35 compute-0 ceph-mon[74357]: pgmap v763: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:35.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:36.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:36 compute-0 ceph-mgr[74650]: [devicehealth INFO root] Check health
Jan 27 08:49:37 compute-0 podman[249224]: 2026-01-27 08:49:37.270546625 +0000 UTC m=+0.083604592 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
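The podman event above records a periodic healthcheck for the ovn_metadata_agent container coming back healthy (health_failing_streak=0). A minimal sketch of reproducing that check by hand; the inspect format path can vary between podman versions, so treat it as an assumption:

```python
import subprocess

name = "ovn_metadata_agent"  # container name taken from the event above

# Trigger one healthcheck run, then read back the recorded status.
subprocess.run(["podman", "healthcheck", "run", name], check=False)
status = subprocess.run(
    ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
    capture_output=True, text=True,
).stdout.strip()
print(status)  # expected: "healthy", matching health_status in the event
```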
Jan 27 08:49:37 compute-0 ceph-mon[74357]: pgmap v764: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:49:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:37.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:38.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:38 compute-0 ceph-mon[74357]: pgmap v765: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:39.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:40.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:40 compute-0 ceph-mon[74357]: pgmap v766: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:41 compute-0 sudo[249245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:49:41 compute-0 sudo[249245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:41 compute-0 sudo[249245]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:41 compute-0 sudo[249270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:49:41 compute-0 sudo[249270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:49:41 compute-0 sudo[249270]: pam_unix(sudo:session): session closed for user root
Jan 27 08:49:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:41.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:42.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:49:42 compute-0 ceph-mon[74357]: pgmap v767: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:43 compute-0 nova_compute[247671]: 2026-01-27 08:49:43.342 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:49:43 compute-0 nova_compute[247671]: 2026-01-27 08:49:43.383 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:49:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:43.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:44.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:44 compute-0 ceph-mon[74357]: pgmap v768: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:49:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:49:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:49:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:49:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:49:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:49:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:46.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:46 compute-0 ceph-mon[74357]: pgmap v769: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:49:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:49:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:48.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:49:48 compute-0 ceph-mon[74357]: pgmap v770: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:49:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:49.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:49:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:50.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:50 compute-0 ceph-mon[74357]: pgmap v771: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:51.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:52.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:49:52 compute-0 ceph-mon[74357]: pgmap v772: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:53 compute-0 podman[249301]: 2026-01-27 08:49:53.265404782 +0000 UTC m=+0.082745547 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 27 08:49:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:49:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:53.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:49:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:54.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:49:54.232 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:49:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:49:54.233 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:49:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:49:54.233 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:49:54 compute-0 ceph-mon[74357]: pgmap v773: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.425 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.425 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.426 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.426 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 08:49:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.482 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.482 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.482 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.482 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.483 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.483 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.483 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.483 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.483 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.528 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.529 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.529 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.529 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:49:55 compute-0 nova_compute[247671]: 2026-01-27 08:49:55.529 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:49:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:55.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:49:55 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822800585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.008 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:49:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:56.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.160 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.161 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5240MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.161 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.161 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.233 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.234 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.279 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:49:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:49:56 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077530010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.687 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.692 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.714 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.716 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:49:56 compute-0 nova_compute[247671]: 2026-01-27 08:49:56.716 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:49:56 compute-0 ceph-mon[74357]: pgmap v774: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:56 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/822800585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:56 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1362607834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:56 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4077530010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:49:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:57.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:49:57 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/207548125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:57 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3817551890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:49:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:49:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:49:58.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:49:58 compute-0 ceph-mon[74357]: pgmap v775: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:49:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:49:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:49:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:49:59.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:00 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 27 08:50:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2533644762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:50:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:50:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:00.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:50:01 compute-0 ceph-mon[74357]: pgmap v776: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:01 compute-0 ceph-mon[74357]: overall HEALTH_OK
Jan 27 08:50:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:01 compute-0 sudo[249375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:01 compute-0 sudo[249375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:01 compute-0 sudo[249375]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:01 compute-0 sudo[249400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:01 compute-0 sudo[249400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:01 compute-0 sudo[249400]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:50:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:01.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:50:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:02.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:50:03 compute-0 ceph-mon[74357]: pgmap v777: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:03.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:50:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:04.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:50:05 compute-0 ceph-mon[74357]: pgmap v778: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:05.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:06.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:06 compute-0 sudo[249428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:06 compute-0 sudo[249428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:06 compute-0 sudo[249428]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:06 compute-0 sudo[249453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:50:06 compute-0 sudo[249453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:06 compute-0 sudo[249453]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:06 compute-0 sudo[249478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:06 compute-0 sudo[249478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:06 compute-0 sudo[249478]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:06 compute-0 sudo[249503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:50:06 compute-0 sudo[249503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:50:06 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:50:06 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:50:06 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:50:06 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:07 compute-0 ceph-mon[74357]: pgmap v779: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:07 compute-0 sudo[249503]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 27 08:50:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 27 08:50:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 27 08:50:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 27 08:50:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:50:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:50:07 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:50:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:50:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:50:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:50:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:07 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 6a6cdf2b-1672-47d4-a0f6-6b345e00a791 does not exist
Jan 27 08:50:07 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev a3f4b721-625a-423f-ad45-07fc2ebf8011 does not exist
Jan 27 08:50:07 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 7a1c4ed1-7675-4737-a8f5-3f2a59b20ac7 does not exist
Jan 27 08:50:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:50:07 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:50:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:50:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:50:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:50:07 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:50:07 compute-0 sudo[249561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:07 compute-0 sudo[249561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:07 compute-0 sudo[249561]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:07.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:07 compute-0 sudo[249592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:50:07 compute-0 sudo[249592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:07 compute-0 sudo[249592]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:07 compute-0 podman[249585]: 2026-01-27 08:50:07.957244691 +0000 UTC m=+0.085735280 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 08:50:07 compute-0 sudo[249632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:07 compute-0 sudo[249632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:07 compute-0 sudo[249632]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:08 compute-0 sudo[249657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:50:08 compute-0 sudo[249657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:08.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 27 08:50:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 27 08:50:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:50:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:50:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:50:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:50:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:50:08 compute-0 podman[249718]: 2026-01-27 08:50:08.371364995 +0000 UTC m=+0.046732977 container create cd51d3883c7ce5488f1c3ac575f9f769e8aa7e0dd62c744c7297a6ddee6a7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:50:08 compute-0 systemd[1]: Started libpod-conmon-cd51d3883c7ce5488f1c3ac575f9f769e8aa7e0dd62c744c7297a6ddee6a7b7b.scope.
Jan 27 08:50:08 compute-0 podman[249718]: 2026-01-27 08:50:08.350676725 +0000 UTC m=+0.026044687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:50:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:50:08 compute-0 podman[249718]: 2026-01-27 08:50:08.470544283 +0000 UTC m=+0.145912325 container init cd51d3883c7ce5488f1c3ac575f9f769e8aa7e0dd62c744c7297a6ddee6a7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Jan 27 08:50:08 compute-0 podman[249718]: 2026-01-27 08:50:08.479415767 +0000 UTC m=+0.154783749 container start cd51d3883c7ce5488f1c3ac575f9f769e8aa7e0dd62c744c7297a6ddee6a7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:50:08 compute-0 podman[249718]: 2026-01-27 08:50:08.483446449 +0000 UTC m=+0.158814441 container attach cd51d3883c7ce5488f1c3ac575f9f769e8aa7e0dd62c744c7297a6ddee6a7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:50:08 compute-0 pensive_hopper[249734]: 167 167
Jan 27 08:50:08 compute-0 systemd[1]: libpod-cd51d3883c7ce5488f1c3ac575f9f769e8aa7e0dd62c744c7297a6ddee6a7b7b.scope: Deactivated successfully.
Jan 27 08:50:08 compute-0 conmon[249734]: conmon cd51d3883c7ce5488f1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd51d3883c7ce5488f1c3ac575f9f769e8aa7e0dd62c744c7297a6ddee6a7b7b.scope/container/memory.events
Jan 27 08:50:08 compute-0 podman[249718]: 2026-01-27 08:50:08.487602183 +0000 UTC m=+0.162970165 container died cd51d3883c7ce5488f1c3ac575f9f769e8aa7e0dd62c744c7297a6ddee6a7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 08:50:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-28c5f585c460b35fa64f02e8d2c051cf5718a7f7849f0d5f063eb60d9eec19c2-merged.mount: Deactivated successfully.
Jan 27 08:50:08 compute-0 podman[249718]: 2026-01-27 08:50:08.526904834 +0000 UTC m=+0.202272766 container remove cd51d3883c7ce5488f1c3ac575f9f769e8aa7e0dd62c744c7297a6ddee6a7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:50:08 compute-0 systemd[1]: libpod-conmon-cd51d3883c7ce5488f1c3ac575f9f769e8aa7e0dd62c744c7297a6ddee6a7b7b.scope: Deactivated successfully.
Jan 27 08:50:08 compute-0 podman[249757]: 2026-01-27 08:50:08.704518761 +0000 UTC m=+0.035370785 container create cd30fad6098edb10075409d5e25712c6efb717bd368425bb8887bd687e44b28f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:50:08 compute-0 systemd[1]: Started libpod-conmon-cd30fad6098edb10075409d5e25712c6efb717bd368425bb8887bd687e44b28f.scope.
Jan 27 08:50:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbca7b079f694c4f9af3c1dcb4bce2adb362901d98ac4f30494eb63e5908fa30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbca7b079f694c4f9af3c1dcb4bce2adb362901d98ac4f30494eb63e5908fa30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbca7b079f694c4f9af3c1dcb4bce2adb362901d98ac4f30494eb63e5908fa30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbca7b079f694c4f9af3c1dcb4bce2adb362901d98ac4f30494eb63e5908fa30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbca7b079f694c4f9af3c1dcb4bce2adb362901d98ac4f30494eb63e5908fa30/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:08 compute-0 podman[249757]: 2026-01-27 08:50:08.689831266 +0000 UTC m=+0.020683310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:50:08 compute-0 podman[249757]: 2026-01-27 08:50:08.788090279 +0000 UTC m=+0.118942383 container init cd30fad6098edb10075409d5e25712c6efb717bd368425bb8887bd687e44b28f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 27 08:50:08 compute-0 podman[249757]: 2026-01-27 08:50:08.799677079 +0000 UTC m=+0.130529103 container start cd30fad6098edb10075409d5e25712c6efb717bd368425bb8887bd687e44b28f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 27 08:50:08 compute-0 podman[249757]: 2026-01-27 08:50:08.802412604 +0000 UTC m=+0.133264658 container attach cd30fad6098edb10075409d5e25712c6efb717bd368425bb8887bd687e44b28f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swanson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:50:09 compute-0 ceph-mon[74357]: pgmap v780: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:09 compute-0 awesome_swanson[249772]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:50:09 compute-0 awesome_swanson[249772]: --> relative data size: 1.0
Jan 27 08:50:09 compute-0 awesome_swanson[249772]: --> All data devices are unavailable
Jan 27 08:50:09 compute-0 systemd[1]: libpod-cd30fad6098edb10075409d5e25712c6efb717bd368425bb8887bd687e44b28f.scope: Deactivated successfully.
Jan 27 08:50:09 compute-0 podman[249757]: 2026-01-27 08:50:09.609601232 +0000 UTC m=+0.940453256 container died cd30fad6098edb10075409d5e25712c6efb717bd368425bb8887bd687e44b28f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:50:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbca7b079f694c4f9af3c1dcb4bce2adb362901d98ac4f30494eb63e5908fa30-merged.mount: Deactivated successfully.
Jan 27 08:50:09 compute-0 podman[249757]: 2026-01-27 08:50:09.661276843 +0000 UTC m=+0.992128867 container remove cd30fad6098edb10075409d5e25712c6efb717bd368425bb8887bd687e44b28f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swanson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:50:09 compute-0 systemd[1]: libpod-conmon-cd30fad6098edb10075409d5e25712c6efb717bd368425bb8887bd687e44b28f.scope: Deactivated successfully.
Jan 27 08:50:09 compute-0 sudo[249657]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:09 compute-0 sudo[249802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:09 compute-0 sudo[249802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:09 compute-0 sudo[249802]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:09 compute-0 sudo[249827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:50:09 compute-0 sudo[249827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:09 compute-0 sudo[249827]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:09 compute-0 sudo[249852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:09 compute-0 sudo[249852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:09 compute-0 sudo[249852]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:09.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:09 compute-0 sudo[249877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:50:09 compute-0 sudo[249877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:10.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:10 compute-0 podman[249942]: 2026-01-27 08:50:10.211179132 +0000 UTC m=+0.038153100 container create 52f8141a4c9638ebcfc654bd688cbc669f10ea1362c02a665c14210cb696cd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:50:10 compute-0 systemd[1]: Started libpod-conmon-52f8141a4c9638ebcfc654bd688cbc669f10ea1362c02a665c14210cb696cd98.scope.
Jan 27 08:50:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:50:10 compute-0 podman[249942]: 2026-01-27 08:50:10.269838936 +0000 UTC m=+0.096812924 container init 52f8141a4c9638ebcfc654bd688cbc669f10ea1362c02a665c14210cb696cd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:50:10 compute-0 podman[249942]: 2026-01-27 08:50:10.274960057 +0000 UTC m=+0.101934025 container start 52f8141a4c9638ebcfc654bd688cbc669f10ea1362c02a665c14210cb696cd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:50:10 compute-0 frosty_nash[249958]: 167 167
Jan 27 08:50:10 compute-0 systemd[1]: libpod-52f8141a4c9638ebcfc654bd688cbc669f10ea1362c02a665c14210cb696cd98.scope: Deactivated successfully.
Jan 27 08:50:10 compute-0 podman[249942]: 2026-01-27 08:50:10.279099831 +0000 UTC m=+0.106073839 container attach 52f8141a4c9638ebcfc654bd688cbc669f10ea1362c02a665c14210cb696cd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:50:10 compute-0 podman[249942]: 2026-01-27 08:50:10.280067118 +0000 UTC m=+0.107041096 container died 52f8141a4c9638ebcfc654bd688cbc669f10ea1362c02a665c14210cb696cd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:50:10 compute-0 podman[249942]: 2026-01-27 08:50:10.19403461 +0000 UTC m=+0.021008598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:50:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-06e07536b18d2cad4a32bfe76bc1f72e9b28bc71869123acc7d32facb7ceacd2-merged.mount: Deactivated successfully.
Jan 27 08:50:10 compute-0 podman[249942]: 2026-01-27 08:50:10.32016481 +0000 UTC m=+0.147138778 container remove 52f8141a4c9638ebcfc654bd688cbc669f10ea1362c02a665c14210cb696cd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 27 08:50:10 compute-0 systemd[1]: libpod-conmon-52f8141a4c9638ebcfc654bd688cbc669f10ea1362c02a665c14210cb696cd98.scope: Deactivated successfully.
Jan 27 08:50:10 compute-0 podman[249985]: 2026-01-27 08:50:10.48553705 +0000 UTC m=+0.041753910 container create 9828caaff938e39a71c4c16fc1b2db32e405687556b82682b7461ed5c04f647c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:50:10 compute-0 systemd[1]: Started libpod-conmon-9828caaff938e39a71c4c16fc1b2db32e405687556b82682b7461ed5c04f647c.scope.
Jan 27 08:50:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5bb9d43a686dbabed13eff1589543001f1a30fab3b54de3df108e892146cc51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5bb9d43a686dbabed13eff1589543001f1a30fab3b54de3df108e892146cc51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5bb9d43a686dbabed13eff1589543001f1a30fab3b54de3df108e892146cc51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5bb9d43a686dbabed13eff1589543001f1a30fab3b54de3df108e892146cc51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:10 compute-0 podman[249985]: 2026-01-27 08:50:10.546806066 +0000 UTC m=+0.103022956 container init 9828caaff938e39a71c4c16fc1b2db32e405687556b82682b7461ed5c04f647c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 27 08:50:10 compute-0 podman[249985]: 2026-01-27 08:50:10.553087719 +0000 UTC m=+0.109304579 container start 9828caaff938e39a71c4c16fc1b2db32e405687556b82682b7461ed5c04f647c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:50:10 compute-0 podman[249985]: 2026-01-27 08:50:10.556138113 +0000 UTC m=+0.112354993 container attach 9828caaff938e39a71c4c16fc1b2db32e405687556b82682b7461ed5c04f647c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_franklin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 08:50:10 compute-0 podman[249985]: 2026-01-27 08:50:10.467959827 +0000 UTC m=+0.024176707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:50:11 compute-0 ceph-mon[74357]: pgmap v781: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:11 compute-0 keen_franklin[250002]: {
Jan 27 08:50:11 compute-0 keen_franklin[250002]:     "0": [
Jan 27 08:50:11 compute-0 keen_franklin[250002]:         {
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             "devices": [
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "/dev/loop3"
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             ],
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             "lv_name": "ceph_lv0",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             "lv_size": "7511998464",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             "name": "ceph_lv0",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             "tags": {
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "ceph.cluster_name": "ceph",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "ceph.crush_device_class": "",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "ceph.encrypted": "0",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "ceph.osd_id": "0",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "ceph.type": "block",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:                 "ceph.vdo": "0"
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             },
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             "type": "block",
Jan 27 08:50:11 compute-0 keen_franklin[250002]:             "vg_name": "ceph_vg0"
Jan 27 08:50:11 compute-0 keen_franklin[250002]:         }
Jan 27 08:50:11 compute-0 keen_franklin[250002]:     ]
Jan 27 08:50:11 compute-0 keen_franklin[250002]: }
Jan 27 08:50:11 compute-0 systemd[1]: libpod-9828caaff938e39a71c4c16fc1b2db32e405687556b82682b7461ed5c04f647c.scope: Deactivated successfully.
Jan 27 08:50:11 compute-0 podman[249985]: 2026-01-27 08:50:11.283153175 +0000 UTC m=+0.839370055 container died 9828caaff938e39a71c4c16fc1b2db32e405687556b82682b7461ed5c04f647c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_franklin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:50:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5bb9d43a686dbabed13eff1589543001f1a30fab3b54de3df108e892146cc51-merged.mount: Deactivated successfully.
Jan 27 08:50:11 compute-0 podman[249985]: 2026-01-27 08:50:11.361906081 +0000 UTC m=+0.918122941 container remove 9828caaff938e39a71c4c16fc1b2db32e405687556b82682b7461ed5c04f647c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_franklin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:50:11 compute-0 systemd[1]: libpod-conmon-9828caaff938e39a71c4c16fc1b2db32e405687556b82682b7461ed5c04f647c.scope: Deactivated successfully.
Jan 27 08:50:11 compute-0 sudo[249877]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:11 compute-0 sudo[250025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:11 compute-0 sudo[250025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:11 compute-0 sudo[250025]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:11 compute-0 sudo[250050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:50:11 compute-0 sudo[250050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:11 compute-0 sudo[250050]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:11 compute-0 sudo[250075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:11 compute-0 sudo[250075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:11 compute-0 sudo[250075]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:11 compute-0 sudo[250100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:50:11 compute-0 sudo[250100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:11.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:11 compute-0 podman[250164]: 2026-01-27 08:50:11.962029622 +0000 UTC m=+0.041428880 container create b8c88e4e023aa25c09792580465ee830c70c59d204143387ac25d304c4d00e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:50:12 compute-0 systemd[1]: Started libpod-conmon-b8c88e4e023aa25c09792580465ee830c70c59d204143387ac25d304c4d00e05.scope.
Jan 27 08:50:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:50:12 compute-0 podman[250164]: 2026-01-27 08:50:11.945016094 +0000 UTC m=+0.024415362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:50:12 compute-0 podman[250164]: 2026-01-27 08:50:12.050003853 +0000 UTC m=+0.129403121 container init b8c88e4e023aa25c09792580465ee830c70c59d204143387ac25d304c4d00e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jennings, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:50:12 compute-0 podman[250164]: 2026-01-27 08:50:12.055144095 +0000 UTC m=+0.134543383 container start b8c88e4e023aa25c09792580465ee830c70c59d204143387ac25d304c4d00e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jennings, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:50:12 compute-0 podman[250164]: 2026-01-27 08:50:12.05863831 +0000 UTC m=+0.138037588 container attach b8c88e4e023aa25c09792580465ee830c70c59d204143387ac25d304c4d00e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 08:50:12 compute-0 sad_jennings[250180]: 167 167
Jan 27 08:50:12 compute-0 systemd[1]: libpod-b8c88e4e023aa25c09792580465ee830c70c59d204143387ac25d304c4d00e05.scope: Deactivated successfully.
Jan 27 08:50:12 compute-0 podman[250164]: 2026-01-27 08:50:12.061422817 +0000 UTC m=+0.140822065 container died b8c88e4e023aa25c09792580465ee830c70c59d204143387ac25d304c4d00e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jennings, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:50:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-707d87f0c6317130b001c1a3b4086fd936490980f1705e446bc9ff8b65360b7a-merged.mount: Deactivated successfully.
Jan 27 08:50:12 compute-0 podman[250164]: 2026-01-27 08:50:12.098292621 +0000 UTC m=+0.177691869 container remove b8c88e4e023aa25c09792580465ee830c70c59d204143387ac25d304c4d00e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jennings, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:50:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:12.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:12 compute-0 systemd[1]: libpod-conmon-b8c88e4e023aa25c09792580465ee830c70c59d204143387ac25d304c4d00e05.scope: Deactivated successfully.
Jan 27 08:50:12 compute-0 podman[250204]: 2026-01-27 08:50:12.273469911 +0000 UTC m=+0.052748602 container create 70d93510dbb2d307f2e6dbb07bfc2c53d7b3254b8dddcafeb3b2c0d919e1e082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:50:12 compute-0 systemd[1]: Started libpod-conmon-70d93510dbb2d307f2e6dbb07bfc2c53d7b3254b8dddcafeb3b2c0d919e1e082.scope.
Jan 27 08:50:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af8f8c6a0b92747708ec85a8f26352fa19b2905f8a5f4776dffb045d69c9954/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af8f8c6a0b92747708ec85a8f26352fa19b2905f8a5f4776dffb045d69c9954/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af8f8c6a0b92747708ec85a8f26352fa19b2905f8a5f4776dffb045d69c9954/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af8f8c6a0b92747708ec85a8f26352fa19b2905f8a5f4776dffb045d69c9954/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:12 compute-0 podman[250204]: 2026-01-27 08:50:12.248661299 +0000 UTC m=+0.027940080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:50:12 compute-0 podman[250204]: 2026-01-27 08:50:12.353449711 +0000 UTC m=+0.132728432 container init 70d93510dbb2d307f2e6dbb07bfc2c53d7b3254b8dddcafeb3b2c0d919e1e082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:50:12 compute-0 podman[250204]: 2026-01-27 08:50:12.359870788 +0000 UTC m=+0.139149489 container start 70d93510dbb2d307f2e6dbb07bfc2c53d7b3254b8dddcafeb3b2c0d919e1e082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 27 08:50:12 compute-0 podman[250204]: 2026-01-27 08:50:12.36286146 +0000 UTC m=+0.142140181 container attach 70d93510dbb2d307f2e6dbb07bfc2c53d7b3254b8dddcafeb3b2c0d919e1e082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:50:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:50:13 compute-0 ceph-mon[74357]: pgmap v782: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:13 compute-0 reverent_merkle[250220]: {
Jan 27 08:50:13 compute-0 reverent_merkle[250220]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:50:13 compute-0 reverent_merkle[250220]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:50:13 compute-0 reverent_merkle[250220]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:50:13 compute-0 reverent_merkle[250220]:         "osd_id": 0,
Jan 27 08:50:13 compute-0 reverent_merkle[250220]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:50:13 compute-0 reverent_merkle[250220]:         "type": "bluestore"
Jan 27 08:50:13 compute-0 reverent_merkle[250220]:     }
Jan 27 08:50:13 compute-0 reverent_merkle[250220]: }
Jan 27 08:50:13 compute-0 systemd[1]: libpod-70d93510dbb2d307f2e6dbb07bfc2c53d7b3254b8dddcafeb3b2c0d919e1e082.scope: Deactivated successfully.
Jan 27 08:50:13 compute-0 podman[250204]: 2026-01-27 08:50:13.240680351 +0000 UTC m=+1.019959042 container died 70d93510dbb2d307f2e6dbb07bfc2c53d7b3254b8dddcafeb3b2c0d919e1e082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:50:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6af8f8c6a0b92747708ec85a8f26352fa19b2905f8a5f4776dffb045d69c9954-merged.mount: Deactivated successfully.
Jan 27 08:50:13 compute-0 podman[250204]: 2026-01-27 08:50:13.29330741 +0000 UTC m=+1.072586101 container remove 70d93510dbb2d307f2e6dbb07bfc2c53d7b3254b8dddcafeb3b2c0d919e1e082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 08:50:13 compute-0 systemd[1]: libpod-conmon-70d93510dbb2d307f2e6dbb07bfc2c53d7b3254b8dddcafeb3b2c0d919e1e082.scope: Deactivated successfully.
Jan 27 08:50:13 compute-0 sudo[250100]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:50:13 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:50:13 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:13 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 3329df43-7aef-4eee-b32e-5f52e10ed85c does not exist
Jan 27 08:50:13 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 30857079-05ec-4193-9a19-824397dc4865 does not exist
Jan 27 08:50:13 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 9a42ee46-3098-4c61-be1a-28f0d4075cee does not exist
Jan 27 08:50:13 compute-0 sudo[250255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:13 compute-0 sudo[250255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:13 compute-0 sudo[250255]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:13 compute-0 sudo[250280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:50:13 compute-0 sudo[250280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:13 compute-0 sudo[250280]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:13.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:50:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:14.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:50:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:50:15
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.rgw.root', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'backups', 'volumes']
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:50:15 compute-0 ceph-mon[74357]: pgmap v783: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:50:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:15.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:50:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:16.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:16 compute-0 ceph-mon[74357]: pgmap v784: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:16 compute-0 sshd-session[250307]: Connection closed by 64.89.160.135 port 58000
Jan 27 08:50:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:50:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:17.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:18.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:18 compute-0 ceph-mon[74357]: pgmap v785: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:19.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:20.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:20 compute-0 ceph-mon[74357]: pgmap v786: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:21 compute-0 sudo[250310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:21 compute-0 sudo[250310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:21 compute-0 sudo[250310]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:21 compute-0 sudo[250336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:21 compute-0 sudo[250336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:21 compute-0 sudo[250336]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:21.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:50:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:22.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:50:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:50:22 compute-0 ceph-mon[74357]: pgmap v787: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:23.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:50:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:24.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:50:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:50:24 compute-0 podman[250362]: 2026-01-27 08:50:24.277654386 +0000 UTC m=+0.092537397 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:50:24 compute-0 ceph-mon[74357]: pgmap v788: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:25.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:26.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:26 compute-0 ceph-mon[74357]: pgmap v789: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:50:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:27.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:50:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:28.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:50:28 compute-0 ceph-mon[74357]: pgmap v790: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:29.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:30.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:30 compute-0 ceph-mon[74357]: pgmap v791: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:31.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:50:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:32.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:50:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:50:33 compute-0 ceph-mon[74357]: pgmap v792: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:33.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:34.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:34 compute-0 ceph-mon[74357]: pgmap v793: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:35.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:36.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:36 compute-0 ceph-mon[74357]: pgmap v794: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:50:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:37.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:38.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:38 compute-0 podman[250396]: 2026-01-27 08:50:38.251219093 +0000 UTC m=+0.063403555 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 08:50:38 compute-0 ceph-mon[74357]: pgmap v795: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:39.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:40.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:40 compute-0 ceph-mon[74357]: pgmap v796: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:41 compute-0 sudo[250419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:41 compute-0 sudo[250419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:41 compute-0 sudo[250419]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:41 compute-0 sudo[250444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:50:41 compute-0 sudo[250444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:50:41 compute-0 sudo[250444]: pam_unix(sudo:session): session closed for user root
Jan 27 08:50:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:41.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:42.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:50:42 compute-0 ceph-mon[74357]: pgmap v797: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:43.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:44.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:44 compute-0 ceph-mon[74357]: pgmap v798: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:50:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:50:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:50:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:50:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:50:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:50:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:45.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:46.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:46 compute-0 ceph-mon[74357]: pgmap v799: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:50:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:47.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:50:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:48.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:50:48 compute-0 ceph-mon[74357]: pgmap v800: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:49.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:50:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:50.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:50:50 compute-0 ceph-mon[74357]: pgmap v801: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:51.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:52.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:50:53 compute-0 ceph-mon[74357]: pgmap v802: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:50:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:53.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:50:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:54.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:50:54.234 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:50:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:50:54.234 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:50:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:50:54.234 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:50:55 compute-0 ceph-mon[74357]: pgmap v803: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:55 compute-0 podman[250475]: 2026-01-27 08:50:55.31642812 +0000 UTC m=+0.114549731 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:50:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:55.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:56.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:56 compute-0 nova_compute[247671]: 2026-01-27 08:50:56.709 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:50:56 compute-0 nova_compute[247671]: 2026-01-27 08:50:56.709 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:50:56 compute-0 nova_compute[247671]: 2026-01-27 08:50:56.745 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:50:56 compute-0 nova_compute[247671]: 2026-01-27 08:50:56.745 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:50:56 compute-0 nova_compute[247671]: 2026-01-27 08:50:56.745 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:50:56 compute-0 nova_compute[247671]: 2026-01-27 08:50:56.745 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:50:56 compute-0 nova_compute[247671]: 2026-01-27 08:50:56.745 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:50:56 compute-0 nova_compute[247671]: 2026-01-27 08:50:56.746 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:50:56 compute-0 nova_compute[247671]: 2026-01-27 08:50:56.746 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 08:50:57 compute-0 ceph-mon[74357]: pgmap v804: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:57 compute-0 nova_compute[247671]: 2026-01-27 08:50:57.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:50:57 compute-0 nova_compute[247671]: 2026-01-27 08:50:57.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 08:50:57 compute-0 nova_compute[247671]: 2026-01-27 08:50:57.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 08:50:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:57 compute-0 nova_compute[247671]: 2026-01-27 08:50:57.497 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:50:57 compute-0 nova_compute[247671]: 2026-01-27 08:50:57.497 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:50:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:50:57 compute-0 nova_compute[247671]: 2026-01-27 08:50:57.672 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:50:57 compute-0 nova_compute[247671]: 2026-01-27 08:50:57.673 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:50:57 compute-0 nova_compute[247671]: 2026-01-27 08:50:57.673 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:50:57 compute-0 nova_compute[247671]: 2026-01-27 08:50:57.673 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:50:57 compute-0 nova_compute[247671]: 2026-01-27 08:50:57.674 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:50:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:50:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:57.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:50:58 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:50:58 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024625918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:50:58 compute-0 nova_compute[247671]: 2026-01-27 08:50:58.131 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:50:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:50:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:50:58.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:50:58 compute-0 nova_compute[247671]: 2026-01-27 08:50:58.354 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:50:58 compute-0 nova_compute[247671]: 2026-01-27 08:50:58.356 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5245MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:50:58 compute-0 nova_compute[247671]: 2026-01-27 08:50:58.356 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:50:58 compute-0 nova_compute[247671]: 2026-01-27 08:50:58.356 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:50:58 compute-0 nova_compute[247671]: 2026-01-27 08:50:58.698 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:50:58 compute-0 nova_compute[247671]: 2026-01-27 08:50:58.698 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:50:58 compute-0 nova_compute[247671]: 2026-01-27 08:50:58.712 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:50:59 compute-0 ceph-mon[74357]: pgmap v805: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1024625918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:50:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1555917127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:50:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:50:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1039518824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:50:59 compute-0 nova_compute[247671]: 2026-01-27 08:50:59.175 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:50:59 compute-0 nova_compute[247671]: 2026-01-27 08:50:59.180 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:50:59 compute-0 nova_compute[247671]: 2026-01-27 08:50:59.217 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:50:59 compute-0 nova_compute[247671]: 2026-01-27 08:50:59.219 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:50:59 compute-0 nova_compute[247671]: 2026-01-27 08:50:59.219 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:50:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:50:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:50:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:50:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:50:59.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:51:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/970135978' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:51:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/970135978' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:51:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/124859043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:51:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1039518824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:51:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3612705443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:51:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:00.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:01 compute-0 ceph-mon[74357]: pgmap v806: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 0 B/s wr, 114 op/s
Jan 27 08:51:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:01.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:01 compute-0 sudo[250551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:01 compute-0 sudo[250551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:01 compute-0 sudo[250551]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:02 compute-0 sudo[250576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:02 compute-0 sudo[250576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:02 compute-0 sudo[250576]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:02 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/935587676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:51:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:02.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:51:03 compute-0 ceph-mon[74357]: pgmap v807: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 0 B/s wr, 114 op/s
Jan 27 08:51:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 0 B/s wr, 114 op/s
Jan 27 08:51:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:03.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:51:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:04.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:51:05 compute-0 ceph-mon[74357]: pgmap v808: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 0 B/s wr, 114 op/s
Jan 27 08:51:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 96 KiB/s rd, 0 B/s wr, 160 op/s
Jan 27 08:51:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:05.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:06.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:07 compute-0 ceph-mon[74357]: pgmap v809: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 96 KiB/s rd, 0 B/s wr, 160 op/s
Jan 27 08:51:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Jan 27 08:51:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:51:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:07.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:08.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:09 compute-0 ceph-mon[74357]: pgmap v810: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Jan 27 08:51:09 compute-0 podman[250605]: 2026-01-27 08:51:09.271790576 +0000 UTC m=+0.079990011 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 08:51:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Jan 27 08:51:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:09.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:10.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:11 compute-0 ceph-mon[74357]: pgmap v811: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Jan 27 08:51:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Jan 27 08:51:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:51:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:11.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:51:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:12.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:51:12 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:51:12.858 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:51:12 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:51:12.859 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 08:51:12 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:51:12.860 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:51:13 compute-0 ceph-mon[74357]: pgmap v812: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Jan 27 08:51:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 0 B/s wr, 62 op/s
Jan 27 08:51:13 compute-0 sudo[250625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:13 compute-0 sudo[250625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:13 compute-0 sudo[250625]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:13.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:13 compute-0 sudo[250650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:51:13 compute-0 sudo[250650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:13 compute-0 sudo[250650]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:14 compute-0 sudo[250675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:14 compute-0 sudo[250675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:14 compute-0 sudo[250675]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:14 compute-0 sudo[250700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:51:14 compute-0 sudo[250700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:51:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:14.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:51:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 27 08:51:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 27 08:51:14 compute-0 sudo[250700]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:51:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:51:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:51:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:51:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:51:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:51:14 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 7f494e8f-d5f6-4ae4-b3ce-f5dc1970f9fb does not exist
Jan 27 08:51:14 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev e8a7bf19-eaca-4f30-b3b0-5d2f6650e214 does not exist
Jan 27 08:51:14 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c6134f37-4927-4b3e-b2b1-d62014a0115e does not exist
Jan 27 08:51:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:51:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:51:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:51:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:51:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:51:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:51:14 compute-0 sudo[250757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:14 compute-0 sudo[250757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:14 compute-0 sudo[250757]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:14 compute-0 sudo[250782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:51:14 compute-0 sudo[250782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:14 compute-0 sudo[250782]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:14 compute-0 sudo[250807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:14 compute-0 sudo[250807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:14 compute-0 sudo[250807]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:14 compute-0 sudo[250832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:51:14 compute-0 sudo[250832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:51:15
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.log', 'backups', 'vms', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root']
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:51:15 compute-0 ceph-mon[74357]: pgmap v813: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 0 B/s wr, 62 op/s
Jan 27 08:51:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 27 08:51:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:51:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:51:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:51:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:51:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:51:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:51:15 compute-0 podman[250897]: 2026-01-27 08:51:15.299598327 +0000 UTC m=+0.047740025 container create e9c54049d39cebdbd3d1928ed254f1a2df9ad1351a7877ffe4357c1f0010bd44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:51:15 compute-0 systemd[1]: Started libpod-conmon-e9c54049d39cebdbd3d1928ed254f1a2df9ad1351a7877ffe4357c1f0010bd44.scope.
Jan 27 08:51:15 compute-0 podman[250897]: 2026-01-27 08:51:15.27536224 +0000 UTC m=+0.023503938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:51:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:51:15 compute-0 sshd-session[250526]: Invalid user  from 93.123.109.115 port 51276
Jan 27 08:51:15 compute-0 podman[250897]: 2026-01-27 08:51:15.387390372 +0000 UTC m=+0.135532060 container init e9c54049d39cebdbd3d1928ed254f1a2df9ad1351a7877ffe4357c1f0010bd44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dubinsky, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:51:15 compute-0 podman[250897]: 2026-01-27 08:51:15.395187187 +0000 UTC m=+0.143328845 container start e9c54049d39cebdbd3d1928ed254f1a2df9ad1351a7877ffe4357c1f0010bd44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:51:15 compute-0 podman[250897]: 2026-01-27 08:51:15.398181149 +0000 UTC m=+0.146322837 container attach e9c54049d39cebdbd3d1928ed254f1a2df9ad1351a7877ffe4357c1f0010bd44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dubinsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:51:15 compute-0 systemd[1]: libpod-e9c54049d39cebdbd3d1928ed254f1a2df9ad1351a7877ffe4357c1f0010bd44.scope: Deactivated successfully.
Jan 27 08:51:15 compute-0 angry_dubinsky[250913]: 167 167
Jan 27 08:51:15 compute-0 conmon[250913]: conmon e9c54049d39cebdbd3d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9c54049d39cebdbd3d1928ed254f1a2df9ad1351a7877ffe4357c1f0010bd44.scope/container/memory.events
Jan 27 08:51:15 compute-0 podman[250897]: 2026-01-27 08:51:15.401608503 +0000 UTC m=+0.149750161 container died e9c54049d39cebdbd3d1928ed254f1a2df9ad1351a7877ffe4357c1f0010bd44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:51:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-71c07900cab5b796ff6f47064df497d2a098f69a9e043b6b852dfedc7511c2ba-merged.mount: Deactivated successfully.
Jan 27 08:51:15 compute-0 podman[250897]: 2026-01-27 08:51:15.439131806 +0000 UTC m=+0.187273474 container remove e9c54049d39cebdbd3d1928ed254f1a2df9ad1351a7877ffe4357c1f0010bd44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 08:51:15 compute-0 systemd[1]: libpod-conmon-e9c54049d39cebdbd3d1928ed254f1a2df9ad1351a7877ffe4357c1f0010bd44.scope: Deactivated successfully.
Jan 27 08:51:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 0 B/s wr, 62 op/s
Jan 27 08:51:15 compute-0 podman[250937]: 2026-01-27 08:51:15.578931271 +0000 UTC m=+0.036624848 container create 728a1b677fae711510a83fd984641fd278f8857fe5e4bc114dfcc7c49796c71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_allen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:51:15 compute-0 systemd[1]: Started libpod-conmon-728a1b677fae711510a83fd984641fd278f8857fe5e4bc114dfcc7c49796c71f.scope.
Jan 27 08:51:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:51:15 compute-0 podman[250937]: 2026-01-27 08:51:15.563105887 +0000 UTC m=+0.020799484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293bf468453676b8d85b281c32d75651c326a7acd5c3a2453c77dba685332c0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293bf468453676b8d85b281c32d75651c326a7acd5c3a2453c77dba685332c0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293bf468453676b8d85b281c32d75651c326a7acd5c3a2453c77dba685332c0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293bf468453676b8d85b281c32d75651c326a7acd5c3a2453c77dba685332c0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293bf468453676b8d85b281c32d75651c326a7acd5c3a2453c77dba685332c0d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:15 compute-0 podman[250937]: 2026-01-27 08:51:15.674683337 +0000 UTC m=+0.132376924 container init 728a1b677fae711510a83fd984641fd278f8857fe5e4bc114dfcc7c49796c71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_allen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:51:15 compute-0 podman[250937]: 2026-01-27 08:51:15.684009793 +0000 UTC m=+0.141703370 container start 728a1b677fae711510a83fd984641fd278f8857fe5e4bc114dfcc7c49796c71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_allen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:51:15 compute-0 podman[250937]: 2026-01-27 08:51:15.690693856 +0000 UTC m=+0.148387453 container attach 728a1b677fae711510a83fd984641fd278f8857fe5e4bc114dfcc7c49796c71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_allen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:51:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:15.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:16 compute-0 sshd-session[250526]: Connection closed by invalid user  93.123.109.115 port 51276 [preauth]
Jan 27 08:51:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:16.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:16 compute-0 frosty_allen[250955]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:51:16 compute-0 frosty_allen[250955]: --> relative data size: 1.0
Jan 27 08:51:16 compute-0 frosty_allen[250955]: --> All data devices are unavailable
Jan 27 08:51:16 compute-0 systemd[1]: libpod-728a1b677fae711510a83fd984641fd278f8857fe5e4bc114dfcc7c49796c71f.scope: Deactivated successfully.
Jan 27 08:51:16 compute-0 podman[250937]: 2026-01-27 08:51:16.457400426 +0000 UTC m=+0.915094003 container died 728a1b677fae711510a83fd984641fd278f8857fe5e4bc114dfcc7c49796c71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_allen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:51:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-293bf468453676b8d85b281c32d75651c326a7acd5c3a2453c77dba685332c0d-merged.mount: Deactivated successfully.
Jan 27 08:51:16 compute-0 podman[250937]: 2026-01-27 08:51:16.531219694 +0000 UTC m=+0.988913271 container remove 728a1b677fae711510a83fd984641fd278f8857fe5e4bc114dfcc7c49796c71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_allen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:51:16 compute-0 systemd[1]: libpod-conmon-728a1b677fae711510a83fd984641fd278f8857fe5e4bc114dfcc7c49796c71f.scope: Deactivated successfully.
Jan 27 08:51:16 compute-0 sudo[250832]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:16 compute-0 sudo[250982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:16 compute-0 sudo[250982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:16 compute-0 sudo[250982]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:16 compute-0 sudo[251007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:51:16 compute-0 sudo[251007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:16 compute-0 sudo[251007]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:16 compute-0 sudo[251032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:16 compute-0 sudo[251032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:16 compute-0 sudo[251032]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:16 compute-0 sudo[251057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:51:16 compute-0 sudo[251057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:17 compute-0 podman[251121]: 2026-01-27 08:51:17.159375878 +0000 UTC m=+0.037993706 container create 0baf6a2b2da74b2217ed73d1965eba52f53c9c8edde10d3f52b6b5ded74d672e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:51:17 compute-0 systemd[1]: Started libpod-conmon-0baf6a2b2da74b2217ed73d1965eba52f53c9c8edde10d3f52b6b5ded74d672e.scope.
Jan 27 08:51:17 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:51:17 compute-0 podman[251121]: 2026-01-27 08:51:17.237432534 +0000 UTC m=+0.116050352 container init 0baf6a2b2da74b2217ed73d1965eba52f53c9c8edde10d3f52b6b5ded74d672e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhabha, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:51:17 compute-0 podman[251121]: 2026-01-27 08:51:17.143386754 +0000 UTC m=+0.022004592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:51:17 compute-0 podman[251121]: 2026-01-27 08:51:17.243984166 +0000 UTC m=+0.122601984 container start 0baf6a2b2da74b2217ed73d1965eba52f53c9c8edde10d3f52b6b5ded74d672e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 08:51:17 compute-0 podman[251121]: 2026-01-27 08:51:17.246747832 +0000 UTC m=+0.125365670 container attach 0baf6a2b2da74b2217ed73d1965eba52f53c9c8edde10d3f52b6b5ded74d672e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhabha, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:51:17 compute-0 pedantic_bhabha[251138]: 167 167
Jan 27 08:51:17 compute-0 systemd[1]: libpod-0baf6a2b2da74b2217ed73d1965eba52f53c9c8edde10d3f52b6b5ded74d672e.scope: Deactivated successfully.
Jan 27 08:51:17 compute-0 podman[251121]: 2026-01-27 08:51:17.247734249 +0000 UTC m=+0.126352067 container died 0baf6a2b2da74b2217ed73d1965eba52f53c9c8edde10d3f52b6b5ded74d672e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhabha, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:51:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-17b9d3a26cdc6da5c2d9331ec1ed7c9a285617f26f2d4c0666d56aaa71384ae6-merged.mount: Deactivated successfully.
Jan 27 08:51:17 compute-0 podman[251121]: 2026-01-27 08:51:17.277441744 +0000 UTC m=+0.156059562 container remove 0baf6a2b2da74b2217ed73d1965eba52f53c9c8edde10d3f52b6b5ded74d672e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhabha, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:51:17 compute-0 systemd[1]: libpod-conmon-0baf6a2b2da74b2217ed73d1965eba52f53c9c8edde10d3f52b6b5ded74d672e.scope: Deactivated successfully.
Jan 27 08:51:17 compute-0 ceph-mon[74357]: pgmap v814: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 0 B/s wr, 62 op/s
Jan 27 08:51:17 compute-0 podman[251162]: 2026-01-27 08:51:17.416815762 +0000 UTC m=+0.035935888 container create 3e7e2d5920e77370bc4d224c823bf95f71b3b93bd35e9c00d24109c6989a0718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_knuth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:51:17 compute-0 systemd[1]: Started libpod-conmon-3e7e2d5920e77370bc4d224c823bf95f71b3b93bd35e9c00d24109c6989a0718.scope.
Jan 27 08:51:17 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:51:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 27 08:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448f74d4522b6de61b2ad9dcf3d4a0403f64f049ae928ca5913e858877a88c39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448f74d4522b6de61b2ad9dcf3d4a0403f64f049ae928ca5913e858877a88c39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448f74d4522b6de61b2ad9dcf3d4a0403f64f049ae928ca5913e858877a88c39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448f74d4522b6de61b2ad9dcf3d4a0403f64f049ae928ca5913e858877a88c39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:17 compute-0 podman[251162]: 2026-01-27 08:51:17.495157267 +0000 UTC m=+0.114277403 container init 3e7e2d5920e77370bc4d224c823bf95f71b3b93bd35e9c00d24109c6989a0718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_knuth, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:51:17 compute-0 podman[251162]: 2026-01-27 08:51:17.401785345 +0000 UTC m=+0.020905511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:51:17 compute-0 podman[251162]: 2026-01-27 08:51:17.501445321 +0000 UTC m=+0.120565447 container start 3e7e2d5920e77370bc4d224c823bf95f71b3b93bd35e9c00d24109c6989a0718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_knuth, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:51:17 compute-0 podman[251162]: 2026-01-27 08:51:17.504094835 +0000 UTC m=+0.123215001 container attach 3e7e2d5920e77370bc4d224c823bf95f71b3b93bd35e9c00d24109c6989a0718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_knuth, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 08:51:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:51:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:18 compute-0 nifty_knuth[251179]: {
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:     "0": [
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:         {
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             "devices": [
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "/dev/loop3"
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             ],
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             "lv_name": "ceph_lv0",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             "lv_size": "7511998464",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             "name": "ceph_lv0",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             "tags": {
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "ceph.cluster_name": "ceph",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "ceph.crush_device_class": "",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "ceph.encrypted": "0",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "ceph.osd_id": "0",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "ceph.type": "block",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:                 "ceph.vdo": "0"
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             },
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             "type": "block",
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:             "vg_name": "ceph_vg0"
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:         }
Jan 27 08:51:18 compute-0 nifty_knuth[251179]:     ]
Jan 27 08:51:18 compute-0 nifty_knuth[251179]: }
Jan 27 08:51:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:18.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:18 compute-0 systemd[1]: libpod-3e7e2d5920e77370bc4d224c823bf95f71b3b93bd35e9c00d24109c6989a0718.scope: Deactivated successfully.
Jan 27 08:51:18 compute-0 podman[251162]: 2026-01-27 08:51:18.216144025 +0000 UTC m=+0.835264151 container died 3e7e2d5920e77370bc4d224c823bf95f71b3b93bd35e9c00d24109c6989a0718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_knuth, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:51:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-448f74d4522b6de61b2ad9dcf3d4a0403f64f049ae928ca5913e858877a88c39-merged.mount: Deactivated successfully.
Jan 27 08:51:18 compute-0 podman[251162]: 2026-01-27 08:51:18.275482581 +0000 UTC m=+0.894602707 container remove 3e7e2d5920e77370bc4d224c823bf95f71b3b93bd35e9c00d24109c6989a0718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:51:18 compute-0 systemd[1]: libpod-conmon-3e7e2d5920e77370bc4d224c823bf95f71b3b93bd35e9c00d24109c6989a0718.scope: Deactivated successfully.
Jan 27 08:51:18 compute-0 sudo[251057]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:18 compute-0 sudo[251201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:18 compute-0 sudo[251201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:18 compute-0 sudo[251201]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:18 compute-0 sudo[251226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:51:18 compute-0 sudo[251226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:18 compute-0 sudo[251226]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:18 compute-0 sudo[251251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:18 compute-0 sudo[251251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:18 compute-0 sudo[251251]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:18 compute-0 sudo[251276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:51:18 compute-0 sudo[251276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:18 compute-0 podman[251342]: 2026-01-27 08:51:18.813664176 +0000 UTC m=+0.035233938 container create 533a76ed8a76e38ec0e3d4bf089a2b3dad56c8f2290f419a2334ef4579bc9b98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:51:18 compute-0 systemd[1]: Started libpod-conmon-533a76ed8a76e38ec0e3d4bf089a2b3dad56c8f2290f419a2334ef4579bc9b98.scope.
Jan 27 08:51:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:51:18 compute-0 podman[251342]: 2026-01-27 08:51:18.880163652 +0000 UTC m=+0.101733494 container init 533a76ed8a76e38ec0e3d4bf089a2b3dad56c8f2290f419a2334ef4579bc9b98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_perlman, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:51:18 compute-0 podman[251342]: 2026-01-27 08:51:18.885532611 +0000 UTC m=+0.107102383 container start 533a76ed8a76e38ec0e3d4bf089a2b3dad56c8f2290f419a2334ef4579bc9b98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_perlman, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:51:18 compute-0 adoring_perlman[251359]: 167 167
Jan 27 08:51:18 compute-0 podman[251342]: 2026-01-27 08:51:18.888687479 +0000 UTC m=+0.110257281 container attach 533a76ed8a76e38ec0e3d4bf089a2b3dad56c8f2290f419a2334ef4579bc9b98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_perlman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:51:18 compute-0 systemd[1]: libpod-533a76ed8a76e38ec0e3d4bf089a2b3dad56c8f2290f419a2334ef4579bc9b98.scope: Deactivated successfully.
Jan 27 08:51:18 compute-0 podman[251342]: 2026-01-27 08:51:18.88982034 +0000 UTC m=+0.111390132 container died 533a76ed8a76e38ec0e3d4bf089a2b3dad56c8f2290f419a2334ef4579bc9b98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_perlman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:51:18 compute-0 podman[251342]: 2026-01-27 08:51:18.798802624 +0000 UTC m=+0.020372406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:51:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8a1006bd107fbb0d8909a3009fde14300e3a83ca5092b1dfd1289463993011e-merged.mount: Deactivated successfully.
Jan 27 08:51:18 compute-0 podman[251342]: 2026-01-27 08:51:18.926633002 +0000 UTC m=+0.148202764 container remove 533a76ed8a76e38ec0e3d4bf089a2b3dad56c8f2290f419a2334ef4579bc9b98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_perlman, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:51:18 compute-0 systemd[1]: libpod-conmon-533a76ed8a76e38ec0e3d4bf089a2b3dad56c8f2290f419a2334ef4579bc9b98.scope: Deactivated successfully.
Jan 27 08:51:19 compute-0 podman[251383]: 2026-01-27 08:51:19.09125382 +0000 UTC m=+0.036935756 container create 3b5435651b7b2ef140b84a221a25a0c2b0f570ed3b3b8cf10b8e22fa0f519bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:51:19 compute-0 systemd[1]: Started libpod-conmon-3b5435651b7b2ef140b84a221a25a0c2b0f570ed3b3b8cf10b8e22fa0f519bfd.scope.
Jan 27 08:51:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338bdc39d0573b941c32791e1471220a92372457a56b6cb7f43191af863b1d29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338bdc39d0573b941c32791e1471220a92372457a56b6cb7f43191af863b1d29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338bdc39d0573b941c32791e1471220a92372457a56b6cb7f43191af863b1d29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338bdc39d0573b941c32791e1471220a92372457a56b6cb7f43191af863b1d29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:19 compute-0 podman[251383]: 2026-01-27 08:51:19.161844979 +0000 UTC m=+0.107526915 container init 3b5435651b7b2ef140b84a221a25a0c2b0f570ed3b3b8cf10b8e22fa0f519bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:51:19 compute-0 podman[251383]: 2026-01-27 08:51:19.169958724 +0000 UTC m=+0.115640660 container start 3b5435651b7b2ef140b84a221a25a0c2b0f570ed3b3b8cf10b8e22fa0f519bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:51:19 compute-0 podman[251383]: 2026-01-27 08:51:19.172541096 +0000 UTC m=+0.118223042 container attach 3b5435651b7b2ef140b84a221a25a0c2b0f570ed3b3b8cf10b8e22fa0f519bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 27 08:51:19 compute-0 podman[251383]: 2026-01-27 08:51:19.077503508 +0000 UTC m=+0.023185464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:51:19 compute-0 ceph-mon[74357]: pgmap v815: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 27 08:51:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:19.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:19 compute-0 quirky_darwin[251399]: {
Jan 27 08:51:19 compute-0 quirky_darwin[251399]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:51:19 compute-0 quirky_darwin[251399]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:51:19 compute-0 quirky_darwin[251399]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:51:19 compute-0 quirky_darwin[251399]:         "osd_id": 0,
Jan 27 08:51:19 compute-0 quirky_darwin[251399]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:51:19 compute-0 quirky_darwin[251399]:         "type": "bluestore"
Jan 27 08:51:19 compute-0 quirky_darwin[251399]:     }
Jan 27 08:51:19 compute-0 quirky_darwin[251399]: }
Jan 27 08:51:20 compute-0 systemd[1]: libpod-3b5435651b7b2ef140b84a221a25a0c2b0f570ed3b3b8cf10b8e22fa0f519bfd.scope: Deactivated successfully.
Jan 27 08:51:20 compute-0 podman[251383]: 2026-01-27 08:51:20.023333518 +0000 UTC m=+0.969015454 container died 3b5435651b7b2ef140b84a221a25a0c2b0f570ed3b3b8cf10b8e22fa0f519bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-338bdc39d0573b941c32791e1471220a92372457a56b6cb7f43191af863b1d29-merged.mount: Deactivated successfully.
Jan 27 08:51:20 compute-0 podman[251383]: 2026-01-27 08:51:20.078096558 +0000 UTC m=+1.023778494 container remove 3b5435651b7b2ef140b84a221a25a0c2b0f570ed3b3b8cf10b8e22fa0f519bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:51:20 compute-0 systemd[1]: libpod-conmon-3b5435651b7b2ef140b84a221a25a0c2b0f570ed3b3b8cf10b8e22fa0f519bfd.scope: Deactivated successfully.
Jan 27 08:51:20 compute-0 sudo[251276]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:51:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:51:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:51:20 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:51:20 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 02ac9cdf-ea0c-4dca-b7c9-ac6bcfc0ea84 does not exist
Jan 27 08:51:20 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f3673686-2ff9-4d02-8ea9-46bcecf3b638 does not exist
Jan 27 08:51:20 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 4382aca5-983a-40da-9952-76f3ea22890b does not exist
Jan 27 08:51:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:51:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:20.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:51:20 compute-0 sudo[251431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:20 compute-0 sudo[251431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:20 compute-0 sudo[251431]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:20 compute-0 sudo[251456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:51:20 compute-0 sudo[251456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:20 compute-0 sudo[251456]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:21 compute-0 ceph-mon[74357]: pgmap v816: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:51:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:51:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:51:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:21.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:51:22 compute-0 sudo[251482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:22 compute-0 sudo[251482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:22 compute-0 sudo[251482]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:22.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:22 compute-0 sudo[251507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:22 compute-0 sudo[251507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:22 compute-0 sudo[251507]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:51:23 compute-0 ceph-mon[74357]: pgmap v817: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:23.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:51:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:24.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:51:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:51:25 compute-0 ceph-mon[74357]: pgmap v818: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:25.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:26.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:26 compute-0 podman[251534]: 2026-01-27 08:51:26.295816081 +0000 UTC m=+0.096769026 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:51:27 compute-0 ceph-mon[74357]: pgmap v819: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:51:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:27.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:28.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:29 compute-0 ceph-mon[74357]: pgmap v820: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:29.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:30.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:31 compute-0 ceph-mon[74357]: pgmap v821: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:31.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:32.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:51:33 compute-0 ceph-mon[74357]: pgmap v822: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:33.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:51:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:34.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:51:34 compute-0 ceph-mon[74357]: pgmap v823: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:35.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:36.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:36 compute-0 ceph-mon[74357]: pgmap v824: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:51:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:37.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:38.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:38 compute-0 ceph-mon[74357]: pgmap v825: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.560329) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503899560440, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2110, "num_deletes": 251, "total_data_size": 4015242, "memory_usage": 4089072, "flush_reason": "Manual Compaction"}
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503899582724, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 3928274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17259, "largest_seqno": 19368, "table_properties": {"data_size": 3918743, "index_size": 6089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18970, "raw_average_key_size": 20, "raw_value_size": 3899764, "raw_average_value_size": 4122, "num_data_blocks": 272, "num_entries": 946, "num_filter_entries": 946, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769503676, "oldest_key_time": 1769503676, "file_creation_time": 1769503899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 22431 microseconds, and 10142 cpu microseconds.
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.582774) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 3928274 bytes OK
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.582796) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.584942) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.584957) EVENT_LOG_v1 {"time_micros": 1769503899584951, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.584974) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 4006773, prev total WAL file size 4006773, number of live WAL files 2.
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.585908) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(3836KB)], [41(7635KB)]
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503899585938, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 11747461, "oldest_snapshot_seqno": -1}
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4474 keys, 9714675 bytes, temperature: kUnknown
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503899637383, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 9714675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9682139, "index_size": 20266, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 111689, "raw_average_key_size": 24, "raw_value_size": 9598399, "raw_average_value_size": 2145, "num_data_blocks": 842, "num_entries": 4474, "num_filter_entries": 4474, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769503899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.637667) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9714675 bytes
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.638776) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.0 rd, 188.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.5 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 4993, records dropped: 519 output_compression: NoCompression
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.638804) EVENT_LOG_v1 {"time_micros": 1769503899638790, "job": 20, "event": "compaction_finished", "compaction_time_micros": 51533, "compaction_time_cpu_micros": 19707, "output_level": 6, "num_output_files": 1, "total_output_size": 9714675, "num_input_records": 4993, "num_output_records": 4474, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503899639831, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769503899641787, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.585815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.641844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.641866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.641868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.641870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:51:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:51:39.641872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:51:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:39.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:40 compute-0 podman[251569]: 2026-01-27 08:51:40.235287751 +0000 UTC m=+0.049464154 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 08:51:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:51:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:40.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:51:40 compute-0 ceph-mon[74357]: pgmap v826: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:41.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:42 compute-0 sudo[251589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:42 compute-0 sudo[251589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:42 compute-0 sudo[251589]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:42 compute-0 sudo[251614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:51:42 compute-0 sudo[251614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:51:42 compute-0 sudo[251614]: pam_unix(sudo:session): session closed for user root
Jan 27 08:51:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:51:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:42.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:51:42 compute-0 ceph-mon[74357]: pgmap v827: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:51:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:51:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:44.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:51:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:44.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:44 compute-0 ceph-mon[74357]: pgmap v828: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:51:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:51:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:51:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:51:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:51:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:51:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:46.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:46.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:46 compute-0 ceph-mon[74357]: pgmap v829: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:51:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:51:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:48.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:51:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:48.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:48 compute-0 ceph-mon[74357]: pgmap v830: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:50.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:50.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:50 compute-0 ceph-mon[74357]: pgmap v831: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:52.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:51:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:52.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:51:52 compute-0 ceph-mon[74357]: pgmap v832: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:51:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:54.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:51:54.234 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:51:54.234 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:51:54.235 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:51:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:54.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:54 compute-0 ceph-mon[74357]: pgmap v833: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:56.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:56.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:56 compute-0 ceph-mon[74357]: pgmap v834: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:57 compute-0 nova_compute[247671]: 2026-01-27 08:51:57.144 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:51:57 compute-0 nova_compute[247671]: 2026-01-27 08:51:57.144 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:51:57 compute-0 nova_compute[247671]: 2026-01-27 08:51:57.144 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:51:57 compute-0 nova_compute[247671]: 2026-01-27 08:51:57.145 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:51:57 compute-0 nova_compute[247671]: 2026-01-27 08:51:57.145 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:51:57 compute-0 podman[251646]: 2026-01-27 08:51:57.345665253 +0000 UTC m=+0.142285529 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:51:57 compute-0 nova_compute[247671]: 2026-01-27 08:51:57.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:51:57 compute-0 nova_compute[247671]: 2026-01-27 08:51:57.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:51:57 compute-0 nova_compute[247671]: 2026-01-27 08:51:57.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
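
The _reclaim_queued_deletes task above bails out immediately because CONF.reclaim_instance_interval is not positive, so SOFT_DELETED instances are never reaped on this host. An illustrative sketch of the shape of that guard, mirroring the logged message; this is not nova's actual source:

    from types import SimpleNamespace

    def _reclaim_queued_deletes(conf):
        # Guard mirroring the DEBUG message in the log line above.
        if conf.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # Otherwise nova would look up SOFT_DELETED instances older than
        # the interval and delete them for real.

    _reclaim_queued_deletes(SimpleNamespace(reclaim_instance_interval=0))
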
Jan 27 08:51:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:51:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:51:58.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:51:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:51:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:51:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:51:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
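
The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102, which recur roughly every two seconds for the rest of this section, look like load-balancer liveness probes against radosgw's beast frontend; 200 with a zero-length body is the expected answer. The same probe can be issued by hand; a minimal sketch, where the target host and port are assumptions since the log never prints radosgw's bind address:

    import http.client

    # Hypothetical endpoint; substitute the actual radosgw bind address/port.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")   # same anonymous probe as in the beast log lines
    resp = conn.getresponse()
    print(resp.status)          # expect 200, content length 0
    conn.close()
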
Jan 27 08:51:58 compute-0 ceph-mon[74357]: pgmap v835: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:59 compute-0 nova_compute[247671]: 2026-01-27 08:51:59.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:51:59 compute-0 nova_compute[247671]: 2026-01-27 08:51:59.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 08:51:59 compute-0 nova_compute[247671]: 2026-01-27 08:51:59.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 08:51:59 compute-0 nova_compute[247671]: 2026-01-27 08:51:59.434 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:51:59 compute-0 nova_compute[247671]: 2026-01-27 08:51:59.435 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:51:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:51:59 compute-0 nova_compute[247671]: 2026-01-27 08:51:59.521 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:59 compute-0 nova_compute[247671]: 2026-01-27 08:51:59.521 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:59 compute-0 nova_compute[247671]: 2026-01-27 08:51:59.521 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:51:59 compute-0 nova_compute[247671]: 2026-01-27 08:51:59.522 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:51:59 compute-0 nova_compute[247671]: 2026-01-27 08:51:59.522 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:51:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3405781540' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:51:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3405781540' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:51:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1422402484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:51:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:51:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2976714983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:51:59 compute-0 nova_compute[247671]: 2026-01-27 08:51:59.978 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
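
To size the RBD-backed disk pool, the resource tracker shells out to `ceph df --format=json` via oslo_concurrency.processutils; the 0.456 s round trip is visible as the mon "df" dispatch lines interleaved above. The equivalent call reduced to the standard library; a minimal sketch, assuming the same /etc/ceph/ceph.conf and client.openstack keyring are readable:

    import json
    import subprocess

    # Same command the log shows processutils running, verbatim.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)
    # "stats" and "pools" are top-level keys of ceph df's JSON output.
    print(stats["stats"]["total_avail_bytes"])
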
Jan 27 08:52:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:52:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:00.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:52:00 compute-0 nova_compute[247671]: 2026-01-27 08:52:00.103 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
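
The libvirt driver's warning above means the host topology reports more than one CPU socket inside a single NUMA node, so nova cannot honor socket-level PCI NUMA affinity (the `socket` value of the PCI NUMA affinity policy) and disables it. The check amounts to counting distinct socket IDs per NUMA cell; an illustrative sketch with made-up topology data, not nova's actual code:

    from collections import defaultdict

    # (numa_node, socket_id) pairs as a capabilities dump might yield them;
    # hypothetical sample data for illustration only.
    cpus = [(0, 0), (0, 1), (1, 2), (1, 3)]

    sockets_per_node = defaultdict(set)
    for node, socket_id in cpus:
        sockets_per_node[node].add(socket_id)

    if any(len(s) > 1 for s in sockets_per_node.values()):
        print("multiple sockets per NUMA node; "
              "`socket` PCI NUMA affinity unsupported")
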
Jan 27 08:52:00 compute-0 nova_compute[247671]: 2026-01-27 08:52:00.104 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5211MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:52:00 compute-0 nova_compute[247671]: 2026-01-27 08:52:00.104 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:52:00 compute-0 nova_compute[247671]: 2026-01-27 08:52:00.104 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:52:00 compute-0 nova_compute[247671]: 2026-01-27 08:52:00.177 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:52:00 compute-0 nova_compute[247671]: 2026-01-27 08:52:00.177 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:52:00 compute-0 nova_compute[247671]: 2026-01-27 08:52:00.192 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:52:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:00.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:52:00 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3689170614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:52:00 compute-0 nova_compute[247671]: 2026-01-27 08:52:00.619 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:52:00 compute-0 nova_compute[247671]: 2026-01-27 08:52:00.625 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:52:00 compute-0 ceph-mon[74357]: pgmap v836: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2976714983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:52:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1719820927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:52:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3689170614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:52:01 compute-0 nova_compute[247671]: 2026-01-27 08:52:01.106 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
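
The inventory nova reports to placement pairs each resource class with a reserved amount and an allocation ratio; placement's effective capacity is (total - reserved) * allocation_ratio, which for the figures above gives 7167 MB of schedulable RAM, 32 VCPUs (8 * 4.0), and 18 GB of disk (20 * 0.9). A worked check of that arithmetic:

    # Capacity formula placement applies to each inventory record:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 20,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 18.0
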
Jan 27 08:52:01 compute-0 nova_compute[247671]: 2026-01-27 08:52:01.107 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:52:01 compute-0 nova_compute[247671]: 2026-01-27 08:52:01.108 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:52:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:52:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:02.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:52:02 compute-0 sudo[251719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:02 compute-0 sudo[251719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:02 compute-0 sudo[251719]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:02 compute-0 sudo[251744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:02 compute-0 sudo[251744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:02 compute-0 sudo[251744]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:02.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:52:02 compute-0 ceph-mon[74357]: pgmap v837: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:02 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1065206885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:52:02 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2646633072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:52:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:04.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:04.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:04 compute-0 ceph-mon[74357]: pgmap v838: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:06.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:52:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:06.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:52:06 compute-0 ceph-mon[74357]: pgmap v839: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:52:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:52:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:08.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:52:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:08.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:09 compute-0 ceph-mon[74357]: pgmap v840: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:52:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:52:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:10.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:11 compute-0 ceph-mon[74357]: pgmap v841: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:11 compute-0 podman[251774]: 2026-01-27 08:52:11.266718753 +0000 UTC m=+0.084544257 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:52:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:52:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:12.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:52:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:52:13 compute-0 ceph-mon[74357]: pgmap v842: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:52:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:14.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:52:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:52:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:14.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:52:15
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', 'backups', 'default.rgw.meta', '.mgr']
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
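
Each balancer pass here ends with "prepared 0/10 changes": in upmap mode the mgr looks for pg-upmap-items that would even out PG placement across OSDs, and with all 305 PGs active+clean on this small cluster there is nothing to move (max misplaced 0.050000 caps how much it may shuffle per pass). The module state can be queried directly; a minimal sketch, assuming admin ceph CLI access on this node:

    import json
    import subprocess

    # "ceph balancer status" reports the mode, active flag and last result.
    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(status["mode"], status["active"], status.get("optimize_result"))
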
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:52:15 compute-0 ceph-mon[74357]: pgmap v843: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
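
The rbd_support module is reloading its TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler schedules per RBD pool (vms, volumes, backups, images); the empty start_after means each pool is scanned from the beginning, and no schedules are reported here. Configured schedules can be listed from the CLI; a minimal sketch, assuming the rbd CLI and an admin keyring on this node:

    import subprocess

    # List mirror-snapshot and trash-purge schedules recursively across pools.
    for sub in (["mirror", "snapshot", "schedule", "ls", "-R"],
                ["trash", "purge", "schedule", "ls", "-R"]):
        out = subprocess.run(["rbd", *sub],
                             capture_output=True, text=True).stdout
        print(" ".join(sub), "->", out.strip() or "(none)")
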
Jan 27 08:52:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:16.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:52:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:16.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:52:17 compute-0 ceph-mon[74357]: pgmap v844: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:52:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:18.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:18.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:19 compute-0 ceph-mon[74357]: pgmap v845: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:20.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:20.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:20 compute-0 sudo[251798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:20 compute-0 sudo[251798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:20 compute-0 sudo[251798]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:20 compute-0 sudo[251823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:52:20 compute-0 sudo[251823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:20 compute-0 sudo[251823]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:20 compute-0 sudo[251848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:20 compute-0 sudo[251848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:20 compute-0 sudo[251848]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:20 compute-0 sudo[251873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:52:20 compute-0 sudo[251873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:21 compute-0 ceph-mon[74357]: pgmap v846: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:21 compute-0 sudo[251873]: pam_unix(sudo:session): session closed for user root
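
The sudo burst around here is cephadm's mgr module driving this host over SSH as ceph-admin: a pair of /bin/true probes to verify passwordless sudo, a `which python3` lookup, then the staged cephadm binary run with `gather-facts` to inventory the host. gather-facts prints a JSON fact dump; a minimal sketch of consuming it, reusing the path from the logged COMMAND and assuming passwordless sudo as ceph-admin has here (field names come from cephadm's fact dump and may vary by release):

    import json
    import subprocess

    facts = json.loads(subprocess.run(
        ["sudo", "/bin/python3",
         "/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/"
         "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
         "--timeout", "895", "gather-facts"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(facts.get("hostname"), facts.get("memory_total_kb"))
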
Jan 27 08:52:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:52:21 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:52:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:52:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:52:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:52:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:52:21 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev dd911dcc-908b-4d54-b628-253108d83eb1 does not exist
Jan 27 08:52:21 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev cd22745d-3d87-4c7e-b140-2dee85a94949 does not exist
Jan 27 08:52:21 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev faf22973-da29-4158-921c-880b68264f5f does not exist
Jan 27 08:52:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:52:21 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:52:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:52:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:52:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:52:21 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:52:21 compute-0 sudo[251929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:21 compute-0 sudo[251929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:21 compute-0 sudo[251929]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:21 compute-0 sudo[251954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:52:21 compute-0 sudo[251954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:21 compute-0 sudo[251954]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:21 compute-0 sudo[251980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:21 compute-0 sudo[251980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:21 compute-0 sudo[251980]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:21 compute-0 sudo[252005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:52:21 compute-0 sudo[252005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:22 compute-0 podman[252071]: 2026-01-27 08:52:22.026822749 +0000 UTC m=+0.038374186 container create 2de0d9662ef81de1ef76408d2141aa754d4aaffcd58b40645bb37749bbaab414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:52:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:22.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:22 compute-0 systemd[1]: Started libpod-conmon-2de0d9662ef81de1ef76408d2141aa754d4aaffcd58b40645bb37749bbaab414.scope.
Jan 27 08:52:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:52:22 compute-0 podman[252071]: 2026-01-27 08:52:22.00920354 +0000 UTC m=+0.020754967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:52:22 compute-0 podman[252071]: 2026-01-27 08:52:22.115687966 +0000 UTC m=+0.127239413 container init 2de0d9662ef81de1ef76408d2141aa754d4aaffcd58b40645bb37749bbaab414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:52:22 compute-0 podman[252071]: 2026-01-27 08:52:22.123364239 +0000 UTC m=+0.134915666 container start 2de0d9662ef81de1ef76408d2141aa754d4aaffcd58b40645bb37749bbaab414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 08:52:22 compute-0 exciting_goldwasser[252088]: 167 167
Jan 27 08:52:22 compute-0 systemd[1]: libpod-2de0d9662ef81de1ef76408d2141aa754d4aaffcd58b40645bb37749bbaab414.scope: Deactivated successfully.
Jan 27 08:52:22 compute-0 podman[252071]: 2026-01-27 08:52:22.13457058 +0000 UTC m=+0.146122017 container attach 2de0d9662ef81de1ef76408d2141aa754d4aaffcd58b40645bb37749bbaab414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:52:22 compute-0 podman[252071]: 2026-01-27 08:52:22.135054973 +0000 UTC m=+0.146606420 container died 2de0d9662ef81de1ef76408d2141aa754d4aaffcd58b40645bb37749bbaab414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ffb99e6e9306aefb94b81a2b0c78168fa6892d7262d5486cf49a01c781135b2-merged.mount: Deactivated successfully.
Jan 27 08:52:22 compute-0 podman[252071]: 2026-01-27 08:52:22.252982946 +0000 UTC m=+0.264534373 container remove 2de0d9662ef81de1ef76408d2141aa754d4aaffcd58b40645bb37749bbaab414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:52:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:52:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:52:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:52:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:52:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:52:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:52:22 compute-0 systemd[1]: libpod-conmon-2de0d9662ef81de1ef76408d2141aa754d4aaffcd58b40645bb37749bbaab414.scope: Deactivated successfully.
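
The create/init/start/attach/died/remove burst for exciting_goldwasser is a throwaway container cephadm runs against the ceph image; its only output is "167 167", the uid and gid of the ceph user baked into the image, which cephadm needs before writing files the daemons must own. A hypothetical reconstruction of that probe, since the log does not show the container's entrypoint:

    import subprocess

    # Hypothetical uid/gid probe; the log only shows the "167 167" output.
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    uid, gid = out.split()
    print(uid, gid)   # expected: 167 167
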
Jan 27 08:52:22 compute-0 podman[252111]: 2026-01-27 08:52:22.396319164 +0000 UTC m=+0.036262188 container create 49cdeb3c52ebf607b182687e30b136b282560860418f2e1193ff161cc33c87b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:52:22 compute-0 systemd[1]: Started libpod-conmon-49cdeb3c52ebf607b182687e30b136b282560860418f2e1193ff161cc33c87b8.scope.
Jan 27 08:52:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d413d3338f6dad69c26a881ee66a7405e5f733993fcabbb2deb5f4d8ff5b231/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d413d3338f6dad69c26a881ee66a7405e5f733993fcabbb2deb5f4d8ff5b231/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d413d3338f6dad69c26a881ee66a7405e5f733993fcabbb2deb5f4d8ff5b231/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d413d3338f6dad69c26a881ee66a7405e5f733993fcabbb2deb5f4d8ff5b231/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d413d3338f6dad69c26a881ee66a7405e5f733993fcabbb2deb5f4d8ff5b231/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:22 compute-0 podman[252111]: 2026-01-27 08:52:22.473211757 +0000 UTC m=+0.113154831 container init 49cdeb3c52ebf607b182687e30b136b282560860418f2e1193ff161cc33c87b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 27 08:52:22 compute-0 podman[252111]: 2026-01-27 08:52:22.382161051 +0000 UTC m=+0.022104095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:52:22 compute-0 podman[252111]: 2026-01-27 08:52:22.48230141 +0000 UTC m=+0.122244434 container start 49cdeb3c52ebf607b182687e30b136b282560860418f2e1193ff161cc33c87b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 27 08:52:22 compute-0 podman[252111]: 2026-01-27 08:52:22.486142506 +0000 UTC m=+0.126085550 container attach 49cdeb3c52ebf607b182687e30b136b282560860418f2e1193ff161cc33c87b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:52:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:22.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:22 compute-0 sudo[252132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:22 compute-0 sudo[252132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:22 compute-0 sudo[252132]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:22 compute-0 sudo[252157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:22 compute-0 sudo[252157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:22 compute-0 sudo[252157]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:52:23 compute-0 hungry_yonath[252127]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:52:23 compute-0 hungry_yonath[252127]: --> relative data size: 1.0
Jan 27 08:52:23 compute-0 hungry_yonath[252127]: --> All data devices are unavailable
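
The hungry_yonath container is the `ceph-volume ... lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd` run from the sudo COMMAND above (with CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group). It was handed one LVM data device and concluded "All data devices are unavailable", which is what ceph-volume reports when the LV is already consumed by an existing OSD, so the batch is a no-op rather than a failure. The decision can be previewed without side effects; a minimal sketch, assuming ceph-volume is runnable on this host and using the device path from the logged command:

    import subprocess

    # Dry-run the same batch; --report prevents any changes being made.
    report = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
         "/dev/ceph_vg0/ceph_lv0"],
        capture_output=True, text=True,
    )
    print(report.stdout or report.stderr)
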
Jan 27 08:52:23 compute-0 systemd[1]: libpod-49cdeb3c52ebf607b182687e30b136b282560860418f2e1193ff161cc33c87b8.scope: Deactivated successfully.
Jan 27 08:52:23 compute-0 podman[252111]: 2026-01-27 08:52:23.223585602 +0000 UTC m=+0.863528626 container died 49cdeb3c52ebf607b182687e30b136b282560860418f2e1193ff161cc33c87b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d413d3338f6dad69c26a881ee66a7405e5f733993fcabbb2deb5f4d8ff5b231-merged.mount: Deactivated successfully.
Jan 27 08:52:23 compute-0 ceph-mon[74357]: pgmap v847: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:23 compute-0 podman[252111]: 2026-01-27 08:52:23.273551989 +0000 UTC m=+0.913495013 container remove 49cdeb3c52ebf607b182687e30b136b282560860418f2e1193ff161cc33c87b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:52:23 compute-0 systemd[1]: libpod-conmon-49cdeb3c52ebf607b182687e30b136b282560860418f2e1193ff161cc33c87b8.scope: Deactivated successfully.
Jan 27 08:52:23 compute-0 sudo[252005]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:23 compute-0 sudo[252204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:23 compute-0 sudo[252204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:23 compute-0 sudo[252204]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:23 compute-0 sudo[252229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:52:23 compute-0 sudo[252229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:23 compute-0 sudo[252229]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:23 compute-0 sudo[252254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:23 compute-0 sudo[252254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:23 compute-0 sudo[252254]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:23 compute-0 sudo[252279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:52:23 compute-0 sudo[252279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:23 compute-0 podman[252345]: 2026-01-27 08:52:23.851008234 +0000 UTC m=+0.040185056 container create fde56bcf623fb4e5cda9f0d9c1a8b9b354bbd1704011789839c6cbfd07a4275a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:52:23 compute-0 systemd[1]: Started libpod-conmon-fde56bcf623fb4e5cda9f0d9c1a8b9b354bbd1704011789839c6cbfd07a4275a.scope.
Jan 27 08:52:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:52:23 compute-0 podman[252345]: 2026-01-27 08:52:23.83681762 +0000 UTC m=+0.025994472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:52:23 compute-0 podman[252345]: 2026-01-27 08:52:23.935224021 +0000 UTC m=+0.124400853 container init fde56bcf623fb4e5cda9f0d9c1a8b9b354bbd1704011789839c6cbfd07a4275a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:52:23 compute-0 podman[252345]: 2026-01-27 08:52:23.942763451 +0000 UTC m=+0.131940283 container start fde56bcf623fb4e5cda9f0d9c1a8b9b354bbd1704011789839c6cbfd07a4275a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 08:52:23 compute-0 podman[252345]: 2026-01-27 08:52:23.946084903 +0000 UTC m=+0.135261735 container attach fde56bcf623fb4e5cda9f0d9c1a8b9b354bbd1704011789839c6cbfd07a4275a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:52:23 compute-0 intelligent_villani[252362]: 167 167
Jan 27 08:52:23 compute-0 systemd[1]: libpod-fde56bcf623fb4e5cda9f0d9c1a8b9b354bbd1704011789839c6cbfd07a4275a.scope: Deactivated successfully.
Jan 27 08:52:23 compute-0 podman[252345]: 2026-01-27 08:52:23.949009774 +0000 UTC m=+0.138186616 container died fde56bcf623fb4e5cda9f0d9c1a8b9b354bbd1704011789839c6cbfd07a4275a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 08:52:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c14c34424d58826faa2579ed99246b89ba49784a1209e09f0e89a3435a0b1bb3-merged.mount: Deactivated successfully.
Jan 27 08:52:23 compute-0 podman[252345]: 2026-01-27 08:52:23.992720097 +0000 UTC m=+0.181896929 container remove fde56bcf623fb4e5cda9f0d9c1a8b9b354bbd1704011789839c6cbfd07a4275a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:52:24 compute-0 systemd[1]: libpod-conmon-fde56bcf623fb4e5cda9f0d9c1a8b9b354bbd1704011789839c6cbfd07a4275a.scope: Deactivated successfully.
Jan 27 08:52:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:52:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:24.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:52:24 compute-0 podman[252385]: 2026-01-27 08:52:24.184019486 +0000 UTC m=+0.052040525 container create 269da1e32c8c42b03673f765d6a583af7491c74d02dd78c620d7bff7ee3a57db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 08:52:24 compute-0 systemd[1]: Started libpod-conmon-269da1e32c8c42b03673f765d6a583af7491c74d02dd78c620d7bff7ee3a57db.scope.
Jan 27 08:52:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9787ac1c3dda95607d292e1df05d0200777f938a44b7735be92a1a2223fabcba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9787ac1c3dda95607d292e1df05d0200777f938a44b7735be92a1a2223fabcba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9787ac1c3dda95607d292e1df05d0200777f938a44b7735be92a1a2223fabcba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:24 compute-0 podman[252385]: 2026-01-27 08:52:24.163278271 +0000 UTC m=+0.031299360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9787ac1c3dda95607d292e1df05d0200777f938a44b7735be92a1a2223fabcba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:52:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
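
The pg_autoscaler pass above is a straight proportion: each pool's "pg target" is its share of raw capacity (the 22535995392 bytes printed on every effective_target_ratio line, i.e. the cluster's 21 GiB) multiplied by the pool's bias and by a cluster-wide PG budget. A minimal reconstruction of the printed figures, assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs reported in the osdmap later in this log (so a budget of 300):

    # Hypothetical reconstruction of the "pg target" figures logged above.
    # Assumption: budget = mon_target_pg_per_osd (default 100) * 3 OSDs.
    PG_BUDGET = 100 * 3

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * PG_BUDGET

    print(pg_target(2.0538165363856318e-05, 1.0))  # ~0.00616145, pool '.mgr'
    print(pg_target(1.4540294062907128e-06, 4.0))  # ~0.00174484, pool 'cephfs.cephfs.meta'

The target is then rounded to a power of two, and because every computed value here is far below the point at which the autoscaler would act, each pool is "quantized" back to its current pg_num (1, 16, or 32).
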
Jan 27 08:52:24 compute-0 podman[252385]: 2026-01-27 08:52:24.271348219 +0000 UTC m=+0.139369268 container init 269da1e32c8c42b03673f765d6a583af7491c74d02dd78c620d7bff7ee3a57db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:52:24 compute-0 podman[252385]: 2026-01-27 08:52:24.280083712 +0000 UTC m=+0.148104741 container start 269da1e32c8c42b03673f765d6a583af7491c74d02dd78c620d7bff7ee3a57db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:52:24 compute-0 podman[252385]: 2026-01-27 08:52:24.283751754 +0000 UTC m=+0.151772813 container attach 269da1e32c8c42b03673f765d6a583af7491c74d02dd78c620d7bff7ee3a57db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:52:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:24.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:25 compute-0 sharp_kirch[252401]: {
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:     "0": [
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:         {
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             "devices": [
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "/dev/loop3"
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             ],
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             "lv_name": "ceph_lv0",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             "lv_size": "7511998464",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             "name": "ceph_lv0",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             "tags": {
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "ceph.cluster_name": "ceph",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "ceph.crush_device_class": "",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "ceph.encrypted": "0",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "ceph.osd_id": "0",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "ceph.type": "block",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:                 "ceph.vdo": "0"
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             },
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             "type": "block",
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:             "vg_name": "ceph_vg0"
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:         }
Jan 27 08:52:25 compute-0 sharp_kirch[252401]:     ]
Jan 27 08:52:25 compute-0 sharp_kirch[252401]: }
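
The JSON emitted by the sharp_kirch container is the output of the `ceph-volume ... lvm list --format json` call logged at 08:52:23: a map from OSD id to the logical volumes backing it, carrying the LVM tags cephadm uses to re-derive the OSD's identity. A minimal sketch of consuming such output, assuming it has been captured into a string; only keys visible in the log are used, and the sample is truncated to the fields of interest:

    import json

    # `captured` stands in for the ceph-volume lvm list output shown above.
    captured = '''{"0": [{"devices": ["/dev/loop3"],
                          "lv_path": "/dev/ceph_vg0/ceph_lv0",
                          "tags": {"ceph.osd_id": "0",
                                   "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
                                   "ceph.type": "block"}}]}'''

    for osd_id, lvs in json.loads(captured).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"], lv["devices"])
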
Jan 27 08:52:25 compute-0 systemd[1]: libpod-269da1e32c8c42b03673f765d6a583af7491c74d02dd78c620d7bff7ee3a57db.scope: Deactivated successfully.
Jan 27 08:52:25 compute-0 podman[252385]: 2026-01-27 08:52:25.063778391 +0000 UTC m=+0.931799430 container died 269da1e32c8c42b03673f765d6a583af7491c74d02dd78c620d7bff7ee3a57db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 08:52:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9787ac1c3dda95607d292e1df05d0200777f938a44b7735be92a1a2223fabcba-merged.mount: Deactivated successfully.
Jan 27 08:52:25 compute-0 podman[252385]: 2026-01-27 08:52:25.123408807 +0000 UTC m=+0.991429856 container remove 269da1e32c8c42b03673f765d6a583af7491c74d02dd78c620d7bff7ee3a57db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:52:25 compute-0 systemd[1]: libpod-conmon-269da1e32c8c42b03673f765d6a583af7491c74d02dd78c620d7bff7ee3a57db.scope: Deactivated successfully.
Jan 27 08:52:25 compute-0 sudo[252279]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:25 compute-0 sudo[252424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:25 compute-0 sudo[252424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:25 compute-0 sudo[252424]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:25 compute-0 sudo[252449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:52:25 compute-0 sudo[252449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:25 compute-0 ceph-mon[74357]: pgmap v848: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:25 compute-0 sudo[252449]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:25 compute-0 sudo[252474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:25 compute-0 sudo[252474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:25 compute-0 sudo[252474]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:25 compute-0 sudo[252499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:52:25 compute-0 sudo[252499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:25 compute-0 podman[252564]: 2026-01-27 08:52:25.798941494 +0000 UTC m=+0.064777168 container create 025a844e3e07e29db85d0e56c85b7c6656c6c80da9528f63cc0b18897ca22268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_black, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 08:52:25 compute-0 systemd[1]: Started libpod-conmon-025a844e3e07e29db85d0e56c85b7c6656c6c80da9528f63cc0b18897ca22268.scope.
Jan 27 08:52:25 compute-0 podman[252564]: 2026-01-27 08:52:25.757176855 +0000 UTC m=+0.023012559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:52:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:52:25 compute-0 podman[252564]: 2026-01-27 08:52:25.956838756 +0000 UTC m=+0.222674450 container init 025a844e3e07e29db85d0e56c85b7c6656c6c80da9528f63cc0b18897ca22268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 27 08:52:25 compute-0 podman[252564]: 2026-01-27 08:52:25.968600152 +0000 UTC m=+0.234435826 container start 025a844e3e07e29db85d0e56c85b7c6656c6c80da9528f63cc0b18897ca22268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:52:25 compute-0 eloquent_black[252580]: 167 167
Jan 27 08:52:25 compute-0 systemd[1]: libpod-025a844e3e07e29db85d0e56c85b7c6656c6c80da9528f63cc0b18897ca22268.scope: Deactivated successfully.
Jan 27 08:52:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:26.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:26 compute-0 podman[252564]: 2026-01-27 08:52:26.105193853 +0000 UTC m=+0.371029577 container attach 025a844e3e07e29db85d0e56c85b7c6656c6c80da9528f63cc0b18897ca22268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_black, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:52:26 compute-0 podman[252564]: 2026-01-27 08:52:26.10652964 +0000 UTC m=+0.372365314 container died 025a844e3e07e29db85d0e56c85b7c6656c6c80da9528f63cc0b18897ca22268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:52:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc49d0e922623db612f26101a4e3994974a3df077ad10b416b52ebe3ab430f84-merged.mount: Deactivated successfully.
Jan 27 08:52:26 compute-0 podman[252564]: 2026-01-27 08:52:26.158424921 +0000 UTC m=+0.424260595 container remove 025a844e3e07e29db85d0e56c85b7c6656c6c80da9528f63cc0b18897ca22268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:52:26 compute-0 systemd[1]: libpod-conmon-025a844e3e07e29db85d0e56c85b7c6656c6c80da9528f63cc0b18897ca22268.scope: Deactivated successfully.
Jan 27 08:52:26 compute-0 podman[252604]: 2026-01-27 08:52:26.310757668 +0000 UTC m=+0.040577437 container create 5c4c1c4a3589d359aac4c2fd41557280515e2d48ee252739fd68bcccc5cc564a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 27 08:52:26 compute-0 systemd[1]: Started libpod-conmon-5c4c1c4a3589d359aac4c2fd41557280515e2d48ee252739fd68bcccc5cc564a.scope.
Jan 27 08:52:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ab201a6d34e5b1cc5f1b2a434fea5437925d0f86f6c8e37802cd27cbb20f4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ab201a6d34e5b1cc5f1b2a434fea5437925d0f86f6c8e37802cd27cbb20f4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ab201a6d34e5b1cc5f1b2a434fea5437925d0f86f6c8e37802cd27cbb20f4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ab201a6d34e5b1cc5f1b2a434fea5437925d0f86f6c8e37802cd27cbb20f4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:26 compute-0 podman[252604]: 2026-01-27 08:52:26.291795632 +0000 UTC m=+0.021615421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:52:26 compute-0 podman[252604]: 2026-01-27 08:52:26.390744258 +0000 UTC m=+0.120564047 container init 5c4c1c4a3589d359aac4c2fd41557280515e2d48ee252739fd68bcccc5cc564a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:26 compute-0 podman[252604]: 2026-01-27 08:52:26.398704539 +0000 UTC m=+0.128524308 container start 5c4c1c4a3589d359aac4c2fd41557280515e2d48ee252739fd68bcccc5cc564a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:52:26 compute-0 podman[252604]: 2026-01-27 08:52:26.40124806 +0000 UTC m=+0.131067859 container attach 5c4c1c4a3589d359aac4c2fd41557280515e2d48ee252739fd68bcccc5cc564a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:52:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:26.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:27 compute-0 pedantic_zhukovsky[252621]: {
Jan 27 08:52:27 compute-0 pedantic_zhukovsky[252621]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:52:27 compute-0 pedantic_zhukovsky[252621]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:52:27 compute-0 pedantic_zhukovsky[252621]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:52:27 compute-0 pedantic_zhukovsky[252621]:         "osd_id": 0,
Jan 27 08:52:27 compute-0 pedantic_zhukovsky[252621]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:52:27 compute-0 pedantic_zhukovsky[252621]:         "type": "bluestore"
Jan 27 08:52:27 compute-0 pedantic_zhukovsky[252621]:     }
Jan 27 08:52:27 compute-0 pedantic_zhukovsky[252621]: }
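
The follow-up `raw list` (pedantic_zhukovsky, from the cephadm call logged at 08:52:25) reports the same OSD keyed by osd_uuid rather than OSD id. As a sketch, a consistency check that the two listings describe one and the same bluestore OSD; both dicts below are transcribed from the JSON blocks above:

    # Fields copied from the lvm list and raw list outputs in this log.
    lvm_tags = {"ceph.osd_id": "0",
                "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30"}
    raw_entry = {"osd_id": 0, "type": "bluestore",
                 "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30"}

    assert int(lvm_tags["ceph.osd_id"]) == raw_entry["osd_id"]
    assert lvm_tags["ceph.osd_fsid"] == raw_entry["osd_uuid"]
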
Jan 27 08:52:27 compute-0 systemd[1]: libpod-5c4c1c4a3589d359aac4c2fd41557280515e2d48ee252739fd68bcccc5cc564a.scope: Deactivated successfully.
Jan 27 08:52:27 compute-0 podman[252604]: 2026-01-27 08:52:27.26284222 +0000 UTC m=+0.992661989 container died 5c4c1c4a3589d359aac4c2fd41557280515e2d48ee252739fd68bcccc5cc564a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:52:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-78ab201a6d34e5b1cc5f1b2a434fea5437925d0f86f6c8e37802cd27cbb20f4d-merged.mount: Deactivated successfully.
Jan 27 08:52:27 compute-0 ceph-mon[74357]: pgmap v849: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:27 compute-0 podman[252604]: 2026-01-27 08:52:27.314968087 +0000 UTC m=+1.044787856 container remove 5c4c1c4a3589d359aac4c2fd41557280515e2d48ee252739fd68bcccc5cc564a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_zhukovsky, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:27 compute-0 systemd[1]: libpod-conmon-5c4c1c4a3589d359aac4c2fd41557280515e2d48ee252739fd68bcccc5cc564a.scope: Deactivated successfully.
Jan 27 08:52:27 compute-0 sudo[252499]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:52:27 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:52:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:52:27 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:52:27 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev e1e9d7aa-0ec9-4ba5-aea7-299421c4ef87 does not exist
Jan 27 08:52:27 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 21c28b7d-f489-4e66-b6a6-532d7957a238 does not exist
Jan 27 08:52:27 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 60e446b0-3da6-4dd7-8c6b-ecafa02b5e7a does not exist
Jan 27 08:52:27 compute-0 sudo[252655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:27 compute-0 sudo[252655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:27 compute-0 sudo[252655]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:27 compute-0 sudo[252685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:52:27 compute-0 sudo[252685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:27 compute-0 sudo[252685]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:27 compute-0 podman[252679]: 2026-01-27 08:52:27.576795633 +0000 UTC m=+0.086917073 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 27 08:52:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:52:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:28.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:52:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:52:28 compute-0 ceph-mon[74357]: pgmap v850: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:28.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=404 latency=0.002000055s ======
Jan 27 08:52:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:29.599 +0000] "GET /info HTTP/1.1" 404 150 - "python-urllib3/1.26.5" - latency=0.002000055s
Jan 27 08:52:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - - [27/Jan/2026:08:52:29.615 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.000000000s
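
Two probe patterns are interleaved in the radosgw access log: the anonymous "HEAD / HTTP/1.0" requests arriving every two seconds from 192.168.122.100 and .102 with no user agent (typical of a load-balancer health check), and the python-urllib3 pair just above, where GET /info returns 404 and GET /swift/healthcheck returns 200, i.e. a client probing the Swift-compatible endpoint. A minimal sketch of the latter probe; the listening port is not recorded in these lines, so 8080 below is purely illustrative:

    import urllib3

    # Assumption: RGW's beast frontend on the host logged above; the port is a guess.
    http = urllib3.PoolManager()
    resp = http.request("GET", "http://192.168.122.100:8080/swift/healthcheck")
    print(resp.status)  # the log shows 200 with an empty body for this path
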
Jan 27 08:52:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:52:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:30.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:52:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:52:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:30.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:52:30 compute-0 ceph-mon[74357]: pgmap v851: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:32.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:32.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:32 compute-0 ceph-mon[74357]: pgmap v852: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:52:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 27 08:52:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 27 08:52:33 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 27 08:52:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:34.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:52:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:34.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:52:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 27 08:52:34 compute-0 ceph-mon[74357]: pgmap v853: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:34 compute-0 ceph-mon[74357]: osdmap e133: 3 total, 3 up, 3 in
Jan 27 08:52:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 27 08:52:34 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 27 08:52:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 27 08:52:35 compute-0 ceph-mon[74357]: osdmap e134: 3 total, 3 up, 3 in
Jan 27 08:52:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 27 08:52:35 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 27 08:52:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:36.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:36.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:36 compute-0 ceph-mon[74357]: pgmap v856: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:36 compute-0 ceph-mon[74357]: osdmap e135: 3 total, 3 up, 3 in
Jan 27 08:52:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 13 MiB data, 165 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.1 MiB/s wr, 23 op/s
Jan 27 08:52:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 27 08:52:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 27 08:52:37 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 27 08:52:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:38.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:38.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:38 compute-0 ceph-mon[74357]: pgmap v858: 305 pgs: 305 active+clean; 13 MiB data, 165 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.1 MiB/s wr, 23 op/s
Jan 27 08:52:38 compute-0 ceph-mon[74357]: osdmap e136: 3 total, 3 up, 3 in
Jan 27 08:52:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 13 MiB data, 165 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.1 MiB/s wr, 24 op/s
Jan 27 08:52:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:40.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:52:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:40.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:52:40 compute-0 ceph-mon[74357]: pgmap v860: 305 pgs: 305 active+clean; 13 MiB data, 165 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.1 MiB/s wr, 24 op/s
Jan 27 08:52:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 6.0 MiB/s wr, 55 op/s
Jan 27 08:52:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:42.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:42 compute-0 podman[252739]: 2026-01-27 08:52:42.252429442 +0000 UTC m=+0.068896533 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
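
The health_status=healthy events for ovn_controller and ovn_metadata_agent come from podman's periodic healthcheck timer, which runs the command recorded under 'healthcheck' in config_data ('test': '/openstack/healthcheck') inside the container. As a sketch, the same check can be triggered once by hand with `podman healthcheck run`; the container name is taken from the log, and the invocation is wrapped in Python only to match the other examples here:

    import subprocess

    # Runs the container's configured healthcheck test once; non-zero exit means unhealthy.
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"], check=True)
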
Jan 27 08:52:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:42.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:42 compute-0 ceph-mon[74357]: pgmap v861: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 6.0 MiB/s wr, 55 op/s
Jan 27 08:52:42 compute-0 sudo[252758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:42 compute-0 sudo[252758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:42 compute-0 sudo[252758]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:42 compute-0 sudo[252783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:52:42 compute-0 sudo[252783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:52:42 compute-0 sudo[252783]: pam_unix(sudo:session): session closed for user root
Jan 27 08:52:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 27 08:52:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:44.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:44.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:44 compute-0 ceph-mon[74357]: pgmap v862: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 27 08:52:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:52:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:52:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:52:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:52:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:52:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:52:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 3.3 MiB/s wr, 25 op/s
Jan 27 08:52:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:52:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:46.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:52:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:52:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:46.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:52:46 compute-0 ceph-mon[74357]: pgmap v863: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 3.3 MiB/s wr, 25 op/s
Jan 27 08:52:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.8 MiB/s wr, 23 op/s
Jan 27 08:52:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:48.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:48.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:48 compute-0 ceph-mon[74357]: pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.8 MiB/s wr, 23 op/s
Jan 27 08:52:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.4 MiB/s wr, 20 op/s
Jan 27 08:52:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:52:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:50.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:52:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:50.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:50 compute-0 ceph-mon[74357]: pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.4 MiB/s wr, 20 op/s
Jan 27 08:52:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.4 MiB/s wr, 19 op/s
Jan 27 08:52:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:52.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:52.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:52 compute-0 ceph-mon[74357]: pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.4 MiB/s wr, 19 op/s
Jan 27 08:52:53 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:52:53.365 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:52:53 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:52:53.366 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 08:52:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:54.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:52:54.235 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:52:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:52:54.236 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:52:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:52:54.236 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:52:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:52:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:54.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:52:54 compute-0 ceph-mon[74357]: pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:56.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:56 compute-0 ceph-mon[74357]: pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:57 compute-0 nova_compute[247671]: 2026-01-27 08:52:57.096 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:52:57 compute-0 nova_compute[247671]: 2026-01-27 08:52:57.096 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:52:57 compute-0 nova_compute[247671]: 2026-01-27 08:52:57.096 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:52:57 compute-0 nova_compute[247671]: 2026-01-27 08:52:57.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:52:57 compute-0 nova_compute[247671]: 2026-01-27 08:52:57.443 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:52:57 compute-0 nova_compute[247671]: 2026-01-27 08:52:57.444 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 08:52:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:52:58.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:58 compute-0 podman[252816]: 2026-01-27 08:52:58.257807085 +0000 UTC m=+0.074868658 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 08:52:58 compute-0 nova_compute[247671]: 2026-01-27 08:52:58.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:52:58 compute-0 nova_compute[247671]: 2026-01-27 08:52:58.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:52:58 compute-0 nova_compute[247671]: 2026-01-27 08:52:58.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:52:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:52:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:52:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:52:58.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:52:59 compute-0 ceph-mon[74357]: pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:52:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1157695741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:52:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 08:52:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3971193749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:52:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 08:52:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3971193749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:52:59 compute-0 nova_compute[247671]: 2026-01-27 08:52:59.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:52:59 compute-0 nova_compute[247671]: 2026-01-27 08:52:59.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 08:52:59 compute-0 nova_compute[247671]: 2026-01-27 08:52:59.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 08:52:59 compute-0 nova_compute[247671]: 2026-01-27 08:52:59.445 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:52:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3971193749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:53:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3971193749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:53:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2283658297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:53:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1993219723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:53:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:00.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:00 compute-0 nova_compute[247671]: 2026-01-27 08:53:00.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:53:00 compute-0 nova_compute[247671]: 2026-01-27 08:53:00.448 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:00 compute-0 nova_compute[247671]: 2026-01-27 08:53:00.448 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:00 compute-0 nova_compute[247671]: 2026-01-27 08:53:00.448 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:00 compute-0 nova_compute[247671]: 2026-01-27 08:53:00.448 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:53:00 compute-0 nova_compute[247671]: 2026-01-27 08:53:00.449 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:53:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:53:00 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591663208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:53:00 compute-0 nova_compute[247671]: 2026-01-27 08:53:00.913 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:53:01 compute-0 ceph-mon[74357]: pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:01 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3548446221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:53:01 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1591663208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:53:01 compute-0 nova_compute[247671]: 2026-01-27 08:53:01.073 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:53:01 compute-0 nova_compute[247671]: 2026-01-27 08:53:01.074 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5219MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:53:01 compute-0 nova_compute[247671]: 2026-01-27 08:53:01.074 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:01 compute-0 nova_compute[247671]: 2026-01-27 08:53:01.074 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:01 compute-0 nova_compute[247671]: 2026-01-27 08:53:01.151 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:53:01 compute-0 nova_compute[247671]: 2026-01-27 08:53:01.152 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:53:01 compute-0 nova_compute[247671]: 2026-01-27 08:53:01.178 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:53:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:53:01 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204908035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:53:01 compute-0 nova_compute[247671]: 2026-01-27 08:53:01.598 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:53:01 compute-0 nova_compute[247671]: 2026-01-27 08:53:01.603 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:53:01 compute-0 nova_compute[247671]: 2026-01-27 08:53:01.632 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:53:01 compute-0 nova_compute[247671]: 2026-01-27 08:53:01.633 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:53:01 compute-0 nova_compute[247671]: 2026-01-27 08:53:01.633 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:02 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4204908035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:53:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:02.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:53:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:02.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:53:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:02 compute-0 sudo[252888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:02 compute-0 sudo[252888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:02 compute-0 sudo[252888]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:02 compute-0 sudo[252913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:02 compute-0 sudo[252913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:02 compute-0 sudo[252913]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:03 compute-0 ceph-mon[74357]: pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:03 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:53:03.368 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:53:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:04.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:04.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:05 compute-0 ceph-mon[74357]: pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:06.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:06.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:07 compute-0 ceph-mon[74357]: pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:08.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:08.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:09 compute-0 ceph-mon[74357]: pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:10.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:10.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:11 compute-0 ceph-mon[74357]: pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:12.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:12.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:13 compute-0 ceph-mon[74357]: pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:13 compute-0 podman[252943]: 2026-01-27 08:53:13.305382211 +0000 UTC m=+0.117296976 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 08:53:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:14.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:14.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:53:15
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'images', '.rgw.root', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes']
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:53:15 compute-0 ceph-mon[74357]: pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:16.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:16.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:17 compute-0 ceph-mon[74357]: pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:53:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:18.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:53:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:18.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:18 compute-0 ceph-mon[74357]: pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:53:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:20.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:53:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:20.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:20 compute-0 ceph-mon[74357]: pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 27 08:53:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 27 08:53:21 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 27 08:53:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:22.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:22.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:22 compute-0 ceph-mon[74357]: pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:22 compute-0 ceph-mon[74357]: osdmap e137: 3 total, 3 up, 3 in
Jan 27 08:53:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 27 08:53:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 27 08:53:22 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 27 08:53:23 compute-0 sudo[252964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:23 compute-0 sudo[252964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:23 compute-0 sudo[252964]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:23 compute-0 sudo[252989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:23 compute-0 sudo[252989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:23 compute-0 sudo[252989]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:23 compute-0 ceph-mon[74357]: osdmap e138: 3 total, 3 up, 3 in
Jan 27 08:53:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:24.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:53:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:53:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:53:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:24.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:53:25 compute-0 ceph-mon[74357]: pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 511 B/s wr, 0 op/s
Jan 27 08:53:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:26.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:53:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:26.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:53:27 compute-0 ceph-mon[74357]: pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 511 B/s wr, 0 op/s
Jan 27 08:53:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Jan 27 08:53:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:27 compute-0 sudo[253017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:27 compute-0 sudo[253017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:27 compute-0 sudo[253017]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:27 compute-0 sudo[253042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:53:27 compute-0 sudo[253042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:28 compute-0 sudo[253042]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:28 compute-0 sudo[253067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:28 compute-0 sudo[253067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:28 compute-0 sudo[253067]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:28.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:28 compute-0 sudo[253092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 27 08:53:28 compute-0 sudo[253092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:28 compute-0 podman[253160]: 2026-01-27 08:53:28.622455289 +0000 UTC m=+0.098383735 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 08:53:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:53:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:28.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:53:28 compute-0 podman[253215]: 2026-01-27 08:53:28.755576835 +0000 UTC m=+0.093397676 container exec b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 08:53:28 compute-0 podman[253215]: 2026-01-27 08:53:28.848817676 +0000 UTC m=+0.186638567 container exec_died b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:53:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:53:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:53:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Jan 27 08:53:29 compute-0 ceph-mon[74357]: pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Jan 27 08:53:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:29 compute-0 podman[253351]: 2026-01-27 08:53:29.615437353 +0000 UTC m=+0.122727812 container exec 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:53:29 compute-0 podman[253351]: 2026-01-27 08:53:29.648221564 +0000 UTC m=+0.155511983 container exec_died 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 08:53:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:30.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:30 compute-0 podman[253417]: 2026-01-27 08:53:30.232108881 +0000 UTC m=+0.294010327 container exec eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, description=keepalived for Ceph, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-type=git, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 27 08:53:30 compute-0 podman[253438]: 2026-01-27 08:53:30.303148402 +0000 UTC m=+0.052432870 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, release=1793, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vcs-type=git, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 27 08:53:30 compute-0 podman[253417]: 2026-01-27 08:53:30.364994132 +0000 UTC m=+0.426895548 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, com.redhat.component=keepalived-container, vcs-type=git, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, name=keepalived)
Jan 27 08:53:30 compute-0 sudo[253092]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:53:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:53:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:30 compute-0 ceph-mon[74357]: pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Jan 27 08:53:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:53:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:30.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:53:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:30 compute-0 sudo[253470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:30 compute-0 sudo[253470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:30 compute-0 sudo[253470]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:30 compute-0 sudo[253495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:53:30 compute-0 sudo[253495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:30 compute-0 sudo[253495]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:30 compute-0 sudo[253520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:30 compute-0 sudo[253520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:30 compute-0 sudo[253520]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:30 compute-0 sudo[253545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:53:30 compute-0 sudo[253545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:31 compute-0 sudo[253545]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 316 B/s rd, 632 B/s wr, 1 op/s
Jan 27 08:53:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:53:31 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:53:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:53:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:53:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:53:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:31 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev e004d563-6723-4ecc-acdc-d7250eb35359 does not exist
Jan 27 08:53:31 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 28075f7a-baa5-4be3-8c95-02f19a200821 does not exist
Jan 27 08:53:31 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f13b98a9-bd35-432b-829c-409e9dfa94b8 does not exist
Jan 27 08:53:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:53:31 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:53:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:53:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:53:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:53:31 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:53:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:53:31 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:53:31 compute-0 sudo[253601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:31 compute-0 sudo[253601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:31 compute-0 sudo[253601]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:31 compute-0 sudo[253626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:53:31 compute-0 sudo[253626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:31 compute-0 sudo[253626]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:31 compute-0 sudo[253651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:31 compute-0 sudo[253651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:31 compute-0 sudo[253651]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:32 compute-0 sudo[253676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:53:32 compute-0 sudo[253676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:53:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:32.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:53:32 compute-0 podman[253740]: 2026-01-27 08:53:32.433359954 +0000 UTC m=+0.050105497 container create 3a15cfe0cc5f823d60e0f5db9f94eb06b83975ce6c7199a9ba8deb46f6f70edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:53:32 compute-0 systemd[1]: Started libpod-conmon-3a15cfe0cc5f823d60e0f5db9f94eb06b83975ce6c7199a9ba8deb46f6f70edd.scope.
Jan 27 08:53:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:53:32 compute-0 podman[253740]: 2026-01-27 08:53:32.410730493 +0000 UTC m=+0.027476086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:53:32 compute-0 podman[253740]: 2026-01-27 08:53:32.512157819 +0000 UTC m=+0.128903432 container init 3a15cfe0cc5f823d60e0f5db9f94eb06b83975ce6c7199a9ba8deb46f6f70edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:53:32 compute-0 podman[253740]: 2026-01-27 08:53:32.517348171 +0000 UTC m=+0.134093714 container start 3a15cfe0cc5f823d60e0f5db9f94eb06b83975ce6c7199a9ba8deb46f6f70edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:53:32 compute-0 lucid_sanderson[253756]: 167 167
Jan 27 08:53:32 compute-0 systemd[1]: libpod-3a15cfe0cc5f823d60e0f5db9f94eb06b83975ce6c7199a9ba8deb46f6f70edd.scope: Deactivated successfully.
Jan 27 08:53:32 compute-0 conmon[253756]: conmon 3a15cfe0cc5f823d60e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a15cfe0cc5f823d60e0f5db9f94eb06b83975ce6c7199a9ba8deb46f6f70edd.scope/container/memory.events
Jan 27 08:53:32 compute-0 podman[253740]: 2026-01-27 08:53:32.522693908 +0000 UTC m=+0.139439491 container attach 3a15cfe0cc5f823d60e0f5db9f94eb06b83975ce6c7199a9ba8deb46f6f70edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:53:32 compute-0 podman[253740]: 2026-01-27 08:53:32.522977906 +0000 UTC m=+0.139723469 container died 3a15cfe0cc5f823d60e0f5db9f94eb06b83975ce6c7199a9ba8deb46f6f70edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:53:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-760d553781b973707045a35cb84d61405ed2ef0cd5739d281c8fb30e9578ee0d-merged.mount: Deactivated successfully.
Jan 27 08:53:32 compute-0 podman[253740]: 2026-01-27 08:53:32.571273992 +0000 UTC m=+0.188019535 container remove 3a15cfe0cc5f823d60e0f5db9f94eb06b83975ce6c7199a9ba8deb46f6f70edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:53:32 compute-0 systemd[1]: libpod-conmon-3a15cfe0cc5f823d60e0f5db9f94eb06b83975ce6c7199a9ba8deb46f6f70edd.scope: Deactivated successfully.
Jan 27 08:53:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:53:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:32.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:53:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 27 08:53:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 27 08:53:32 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 27 08:53:32 compute-0 ceph-mon[74357]: pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 316 B/s rd, 632 B/s wr, 1 op/s
Jan 27 08:53:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:53:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:53:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:53:32 compute-0 ceph-mon[74357]: osdmap e139: 3 total, 3 up, 3 in
Jan 27 08:53:32 compute-0 podman[253779]: 2026-01-27 08:53:32.748919462 +0000 UTC m=+0.070074005 container create a4163154c275f4b172a488c5a1dd7e47b6a75610d3b0ede31bf3af7d07664bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:53:32 compute-0 systemd[1]: Started libpod-conmon-a4163154c275f4b172a488c5a1dd7e47b6a75610d3b0ede31bf3af7d07664bf6.scope.
Jan 27 08:53:32 compute-0 podman[253779]: 2026-01-27 08:53:32.721492279 +0000 UTC m=+0.042646912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:53:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e7a3ec1502211549a18f0f99fa344713c6989cbc91eb0dcbd7d2936794cf71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e7a3ec1502211549a18f0f99fa344713c6989cbc91eb0dcbd7d2936794cf71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e7a3ec1502211549a18f0f99fa344713c6989cbc91eb0dcbd7d2936794cf71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e7a3ec1502211549a18f0f99fa344713c6989cbc91eb0dcbd7d2936794cf71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e7a3ec1502211549a18f0f99fa344713c6989cbc91eb0dcbd7d2936794cf71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:32 compute-0 podman[253779]: 2026-01-27 08:53:32.844451496 +0000 UTC m=+0.165606059 container init a4163154c275f4b172a488c5a1dd7e47b6a75610d3b0ede31bf3af7d07664bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:53:32 compute-0 podman[253779]: 2026-01-27 08:53:32.850676617 +0000 UTC m=+0.171831160 container start a4163154c275f4b172a488c5a1dd7e47b6a75610d3b0ede31bf3af7d07664bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:53:32 compute-0 podman[253779]: 2026-01-27 08:53:32.854905153 +0000 UTC m=+0.176059716 container attach a4163154c275f4b172a488c5a1dd7e47b6a75610d3b0ede31bf3af7d07664bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:53:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 614 B/s wr, 1 op/s
Jan 27 08:53:33 compute-0 wizardly_colden[253795]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:53:33 compute-0 wizardly_colden[253795]: --> relative data size: 1.0
Jan 27 08:53:33 compute-0 wizardly_colden[253795]: --> All data devices are unavailable
Jan 27 08:53:33 compute-0 systemd[1]: libpod-a4163154c275f4b172a488c5a1dd7e47b6a75610d3b0ede31bf3af7d07664bf6.scope: Deactivated successfully.
Jan 27 08:53:33 compute-0 conmon[253795]: conmon a4163154c275f4b172a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4163154c275f4b172a488c5a1dd7e47b6a75610d3b0ede31bf3af7d07664bf6.scope/container/memory.events
Jan 27 08:53:33 compute-0 podman[253779]: 2026-01-27 08:53:33.627060722 +0000 UTC m=+0.948215265 container died a4163154c275f4b172a488c5a1dd7e47b6a75610d3b0ede31bf3af7d07664bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 08:53:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7e7a3ec1502211549a18f0f99fa344713c6989cbc91eb0dcbd7d2936794cf71-merged.mount: Deactivated successfully.
Jan 27 08:53:33 compute-0 podman[253779]: 2026-01-27 08:53:33.813256846 +0000 UTC m=+1.134411379 container remove a4163154c275f4b172a488c5a1dd7e47b6a75610d3b0ede31bf3af7d07664bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:53:33 compute-0 systemd[1]: libpod-conmon-a4163154c275f4b172a488c5a1dd7e47b6a75610d3b0ede31bf3af7d07664bf6.scope: Deactivated successfully.
Jan 27 08:53:33 compute-0 sudo[253676]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:33 compute-0 sudo[253824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:33 compute-0 sudo[253824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:33 compute-0 sudo[253824]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:34 compute-0 sudo[253849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:53:34 compute-0 sudo[253849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:34 compute-0 sudo[253849]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:34 compute-0 sudo[253874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:34 compute-0 sudo[253874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:34 compute-0 sudo[253874]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:34.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:34 compute-0 sudo[253899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:53:34 compute-0 sudo[253899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:34 compute-0 podman[253964]: 2026-01-27 08:53:34.481235485 +0000 UTC m=+0.037666327 container create cb18cd642ebec17e37759651da30fac9ada471fda6dcbe8d059d665918403086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:53:34 compute-0 systemd[1]: Started libpod-conmon-cb18cd642ebec17e37759651da30fac9ada471fda6dcbe8d059d665918403086.scope.
Jan 27 08:53:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:53:34 compute-0 podman[253964]: 2026-01-27 08:53:34.556984625 +0000 UTC m=+0.113415477 container init cb18cd642ebec17e37759651da30fac9ada471fda6dcbe8d059d665918403086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:53:34 compute-0 podman[253964]: 2026-01-27 08:53:34.463612201 +0000 UTC m=+0.020043083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:53:34 compute-0 podman[253964]: 2026-01-27 08:53:34.565391616 +0000 UTC m=+0.121822448 container start cb18cd642ebec17e37759651da30fac9ada471fda6dcbe8d059d665918403086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_banzai, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:53:34 compute-0 mystifying_banzai[253980]: 167 167
Jan 27 08:53:34 compute-0 podman[253964]: 2026-01-27 08:53:34.5706159 +0000 UTC m=+0.127046782 container attach cb18cd642ebec17e37759651da30fac9ada471fda6dcbe8d059d665918403086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:53:34 compute-0 systemd[1]: libpod-cb18cd642ebec17e37759651da30fac9ada471fda6dcbe8d059d665918403086.scope: Deactivated successfully.
Jan 27 08:53:34 compute-0 conmon[253980]: conmon cb18cd642ebec17e3775 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb18cd642ebec17e37759651da30fac9ada471fda6dcbe8d059d665918403086.scope/container/memory.events
Jan 27 08:53:34 compute-0 podman[253964]: 2026-01-27 08:53:34.571855653 +0000 UTC m=+0.128286535 container died cb18cd642ebec17e37759651da30fac9ada471fda6dcbe8d059d665918403086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_banzai, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:53:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e394ed30e34d71b271075c9e3e60306d04350346d008bd2f7e0d5306477c7cdc-merged.mount: Deactivated successfully.
Jan 27 08:53:34 compute-0 podman[253964]: 2026-01-27 08:53:34.624600912 +0000 UTC m=+0.181031744 container remove cb18cd642ebec17e37759651da30fac9ada471fda6dcbe8d059d665918403086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:53:34 compute-0 systemd[1]: libpod-conmon-cb18cd642ebec17e37759651da30fac9ada471fda6dcbe8d059d665918403086.scope: Deactivated successfully.
Jan 27 08:53:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:34.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:34 compute-0 ceph-mon[74357]: pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 614 B/s wr, 1 op/s
Jan 27 08:53:34 compute-0 podman[254004]: 2026-01-27 08:53:34.806829788 +0000 UTC m=+0.059550027 container create 45870f5f2ad6870be817237252391fff8ff24391000542d182081df85941e97e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cray, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:53:34 compute-0 systemd[1]: Started libpod-conmon-45870f5f2ad6870be817237252391fff8ff24391000542d182081df85941e97e.scope.
Jan 27 08:53:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:53:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ac3cc5d85cbf7c63a8bafba8870fc6423286d261150a268190026be616cde7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ac3cc5d85cbf7c63a8bafba8870fc6423286d261150a268190026be616cde7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ac3cc5d85cbf7c63a8bafba8870fc6423286d261150a268190026be616cde7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ac3cc5d85cbf7c63a8bafba8870fc6423286d261150a268190026be616cde7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:34 compute-0 podman[254004]: 2026-01-27 08:53:34.788660809 +0000 UTC m=+0.041381058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:53:34 compute-0 podman[254004]: 2026-01-27 08:53:34.895712499 +0000 UTC m=+0.148432818 container init 45870f5f2ad6870be817237252391fff8ff24391000542d182081df85941e97e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cray, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:53:34 compute-0 podman[254004]: 2026-01-27 08:53:34.906980389 +0000 UTC m=+0.159700648 container start 45870f5f2ad6870be817237252391fff8ff24391000542d182081df85941e97e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 27 08:53:34 compute-0 podman[254004]: 2026-01-27 08:53:34.91140896 +0000 UTC m=+0.164129279 container attach 45870f5f2ad6870be817237252391fff8ff24391000542d182081df85941e97e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:53:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 204 B/s wr, 0 op/s
Jan 27 08:53:35 compute-0 affectionate_cray[254020]: {
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:     "0": [
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:         {
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             "devices": [
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "/dev/loop3"
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             ],
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             "lv_name": "ceph_lv0",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             "lv_size": "7511998464",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             "name": "ceph_lv0",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             "tags": {
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "ceph.cluster_name": "ceph",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "ceph.crush_device_class": "",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "ceph.encrypted": "0",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "ceph.osd_id": "0",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "ceph.type": "block",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:                 "ceph.vdo": "0"
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             },
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             "type": "block",
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:             "vg_name": "ceph_vg0"
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:         }
Jan 27 08:53:35 compute-0 affectionate_cray[254020]:     ]
Jan 27 08:53:35 compute-0 affectionate_cray[254020]: }
Jan 27 08:53:35 compute-0 podman[254004]: 2026-01-27 08:53:35.716046232 +0000 UTC m=+0.968766451 container died 45870f5f2ad6870be817237252391fff8ff24391000542d182081df85941e97e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 08:53:35 compute-0 systemd[1]: libpod-45870f5f2ad6870be817237252391fff8ff24391000542d182081df85941e97e.scope: Deactivated successfully.
Jan 27 08:53:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5ac3cc5d85cbf7c63a8bafba8870fc6423286d261150a268190026be616cde7-merged.mount: Deactivated successfully.
Jan 27 08:53:35 compute-0 podman[254004]: 2026-01-27 08:53:35.778629681 +0000 UTC m=+1.031349900 container remove 45870f5f2ad6870be817237252391fff8ff24391000542d182081df85941e97e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 27 08:53:35 compute-0 systemd[1]: libpod-conmon-45870f5f2ad6870be817237252391fff8ff24391000542d182081df85941e97e.scope: Deactivated successfully.
Jan 27 08:53:35 compute-0 sudo[253899]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:35 compute-0 sudo[254044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:35 compute-0 sudo[254044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:35 compute-0 sudo[254044]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:35 compute-0 sudo[254069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:53:35 compute-0 sudo[254069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:35 compute-0 sudo[254069]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:36 compute-0 sudo[254094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:36 compute-0 sudo[254094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:36 compute-0 sudo[254094]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:36 compute-0 sudo[254119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:53:36 compute-0 sudo[254119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:36.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:36 compute-0 podman[254184]: 2026-01-27 08:53:36.448074159 +0000 UTC m=+0.063225798 container create 76339e6070e7fd3ae38103fdf4ae97c43aee4eca3a81bb3d96ef1e4a78ee9d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:53:36 compute-0 systemd[1]: Started libpod-conmon-76339e6070e7fd3ae38103fdf4ae97c43aee4eca3a81bb3d96ef1e4a78ee9d1c.scope.
Jan 27 08:53:36 compute-0 podman[254184]: 2026-01-27 08:53:36.42007561 +0000 UTC m=+0.035227339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:53:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:53:36 compute-0 podman[254184]: 2026-01-27 08:53:36.532261701 +0000 UTC m=+0.147413430 container init 76339e6070e7fd3ae38103fdf4ae97c43aee4eca3a81bb3d96ef1e4a78ee9d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:53:36 compute-0 podman[254184]: 2026-01-27 08:53:36.538287607 +0000 UTC m=+0.153439236 container start 76339e6070e7fd3ae38103fdf4ae97c43aee4eca3a81bb3d96ef1e4a78ee9d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 27 08:53:36 compute-0 podman[254184]: 2026-01-27 08:53:36.541069583 +0000 UTC m=+0.156221322 container attach 76339e6070e7fd3ae38103fdf4ae97c43aee4eca3a81bb3d96ef1e4a78ee9d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:53:36 compute-0 xenodochial_ritchie[254200]: 167 167
Jan 27 08:53:36 compute-0 systemd[1]: libpod-76339e6070e7fd3ae38103fdf4ae97c43aee4eca3a81bb3d96ef1e4a78ee9d1c.scope: Deactivated successfully.
Jan 27 08:53:36 compute-0 podman[254184]: 2026-01-27 08:53:36.542545444 +0000 UTC m=+0.157697073 container died 76339e6070e7fd3ae38103fdf4ae97c43aee4eca3a81bb3d96ef1e4a78ee9d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:53:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e0c2cef99d378a03de1b55f4a0ee0d9f303839820b108e6ea98656e371e97c9-merged.mount: Deactivated successfully.
Jan 27 08:53:36 compute-0 podman[254184]: 2026-01-27 08:53:36.575351274 +0000 UTC m=+0.190502903 container remove 76339e6070e7fd3ae38103fdf4ae97c43aee4eca3a81bb3d96ef1e4a78ee9d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:53:36 compute-0 systemd[1]: libpod-conmon-76339e6070e7fd3ae38103fdf4ae97c43aee4eca3a81bb3d96ef1e4a78ee9d1c.scope: Deactivated successfully.
Jan 27 08:53:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:36.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:36 compute-0 podman[254224]: 2026-01-27 08:53:36.751254906 +0000 UTC m=+0.064512633 container create 9b84d8573c2fbae517db28bf4910feac27cf854a5a6f8ffed6fd8afb7f0ca465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 27 08:53:36 compute-0 systemd[1]: Started libpod-conmon-9b84d8573c2fbae517db28bf4910feac27cf854a5a6f8ffed6fd8afb7f0ca465.scope.
Jan 27 08:53:36 compute-0 ceph-mon[74357]: pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 204 B/s wr, 0 op/s
Jan 27 08:53:36 compute-0 podman[254224]: 2026-01-27 08:53:36.727291639 +0000 UTC m=+0.040549456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:53:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7760ea26c0729e0c17755dcd9748773ccbd835327cd07ff451fc598a01f123ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7760ea26c0729e0c17755dcd9748773ccbd835327cd07ff451fc598a01f123ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7760ea26c0729e0c17755dcd9748773ccbd835327cd07ff451fc598a01f123ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7760ea26c0729e0c17755dcd9748773ccbd835327cd07ff451fc598a01f123ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:36 compute-0 podman[254224]: 2026-01-27 08:53:36.846836451 +0000 UTC m=+0.160094198 container init 9b84d8573c2fbae517db28bf4910feac27cf854a5a6f8ffed6fd8afb7f0ca465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:53:36 compute-0 podman[254224]: 2026-01-27 08:53:36.852418215 +0000 UTC m=+0.165675952 container start 9b84d8573c2fbae517db28bf4910feac27cf854a5a6f8ffed6fd8afb7f0ca465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wing, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:53:36 compute-0 podman[254224]: 2026-01-27 08:53:36.855679434 +0000 UTC m=+0.168937151 container attach 9b84d8573c2fbae517db28bf4910feac27cf854a5a6f8ffed6fd8afb7f0ca465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:53:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:37 compute-0 inspiring_wing[254240]: {
Jan 27 08:53:37 compute-0 inspiring_wing[254240]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:53:37 compute-0 inspiring_wing[254240]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:53:37 compute-0 inspiring_wing[254240]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:53:37 compute-0 inspiring_wing[254240]:         "osd_id": 0,
Jan 27 08:53:37 compute-0 inspiring_wing[254240]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:53:37 compute-0 inspiring_wing[254240]:         "type": "bluestore"
Jan 27 08:53:37 compute-0 inspiring_wing[254240]:     }
Jan 27 08:53:37 compute-0 inspiring_wing[254240]: }
Jan 27 08:53:37 compute-0 systemd[1]: libpod-9b84d8573c2fbae517db28bf4910feac27cf854a5a6f8ffed6fd8afb7f0ca465.scope: Deactivated successfully.
Jan 27 08:53:37 compute-0 podman[254224]: 2026-01-27 08:53:37.735174642 +0000 UTC m=+1.048432369 container died 9b84d8573c2fbae517db28bf4910feac27cf854a5a6f8ffed6fd8afb7f0ca465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wing, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:53:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7760ea26c0729e0c17755dcd9748773ccbd835327cd07ff451fc598a01f123ba-merged.mount: Deactivated successfully.
Jan 27 08:53:37 compute-0 podman[254224]: 2026-01-27 08:53:37.787769907 +0000 UTC m=+1.101027634 container remove 9b84d8573c2fbae517db28bf4910feac27cf854a5a6f8ffed6fd8afb7f0ca465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wing, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:53:37 compute-0 systemd[1]: libpod-conmon-9b84d8573c2fbae517db28bf4910feac27cf854a5a6f8ffed6fd8afb7f0ca465.scope: Deactivated successfully.
Jan 27 08:53:37 compute-0 sudo[254119]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:53:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:53:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:37 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b5cda4c9-05b4-4bf2-b598-74e41ad5c760 does not exist
Jan 27 08:53:37 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 85da2a10-e34a-46f2-8325-99d09ad2af6a does not exist
Jan 27 08:53:37 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 408ebf4c-28b5-45f9-afa7-ad75648f38df does not exist
Jan 27 08:53:37 compute-0 sudo[254277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:37 compute-0 sudo[254277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:37 compute-0 sudo[254277]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:37 compute-0 sudo[254302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:53:37 compute-0 sudo[254302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:37 compute-0 sudo[254302]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:38.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:38.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:38 compute-0 ceph-mon[74357]: pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:53:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1186718358' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:53:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1186718358' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:53:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:40.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:40.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:40 compute-0 ceph-mon[74357]: pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 307 B/s wr, 16 op/s
Jan 27 08:53:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:42.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:53:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:42.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:53:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:42 compute-0 ceph-mon[74357]: pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 307 B/s wr, 16 op/s
Jan 27 08:53:43 compute-0 sudo[254329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:43 compute-0 sudo[254329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:43 compute-0 sudo[254329]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:43 compute-0 sudo[254354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:53:43 compute-0 sudo[254354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:53:43 compute-0 sudo[254354]: pam_unix(sudo:session): session closed for user root
Jan 27 08:53:43 compute-0 podman[254378]: 2026-01-27 08:53:43.460818534 +0000 UTC m=+0.067044212 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:53:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 283 B/s wr, 14 op/s
Jan 27 08:53:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:44.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:44.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:45 compute-0 ceph-mon[74357]: pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 283 B/s wr, 14 op/s
Jan 27 08:53:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:53:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:53:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:53:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:53:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:53:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:53:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 08:53:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:46.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:53:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:46.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:53:47 compute-0 ceph-mon[74357]: pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 08:53:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 08:53:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:53:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:48.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:53:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:48.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:49 compute-0 ceph-mon[74357]: pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 08:53:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 08:53:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:50.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:50.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:51 compute-0 ceph-mon[74357]: pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 08:53:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 08:53:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:52.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:52.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:53 compute-0 ceph-mon[74357]: pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 08:53:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:53:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:54.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:53:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:53:54.237 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:53:54.237 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:53:54.237 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:54.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:53:54.978 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:53:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:53:54.979 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 08:53:55 compute-0 ceph-mon[74357]: pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:55 compute-0 nova_compute[247671]: 2026-01-27 08:53:55.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:53:55 compute-0 nova_compute[247671]: 2026-01-27 08:53:55.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:53:55 compute-0 nova_compute[247671]: 2026-01-27 08:53:55.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 08:53:55 compute-0 nova_compute[247671]: 2026-01-27 08:53:55.449 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 08:53:55 compute-0 nova_compute[247671]: 2026-01-27 08:53:55.450 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:53:55 compute-0 nova_compute[247671]: 2026-01-27 08:53:55.451 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 08:53:55 compute-0 nova_compute[247671]: 2026-01-27 08:53:55.467 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:53:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:56.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:53:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:56.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:53:57 compute-0 ceph-mon[74357]: pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:57 compute-0 nova_compute[247671]: 2026-01-27 08:53:57.497 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:53:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:53:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:53:58.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:53:58 compute-0 nova_compute[247671]: 2026-01-27 08:53:58.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:53:58 compute-0 nova_compute[247671]: 2026-01-27 08:53:58.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:53:58 compute-0 nova_compute[247671]: 2026-01-27 08:53:58.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:53:58 compute-0 nova_compute[247671]: 2026-01-27 08:53:58.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 08:53:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:53:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:53:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:53:58.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:53:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:53:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1623639513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:53:59 compute-0 ceph-mon[74357]: pgmap v902: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:53:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1883292831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:53:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1883292831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:53:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1623639513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:53:59 compute-0 podman[254407]: 2026-01-27 08:53:59.349145303 +0000 UTC m=+0.157702993 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 08:53:59 compute-0 nova_compute[247671]: 2026-01-27 08:53:59.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:53:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:00.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/434888357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:54:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3249805879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:54:00 compute-0 nova_compute[247671]: 2026-01-27 08:54:00.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:54:00 compute-0 nova_compute[247671]: 2026-01-27 08:54:00.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 08:54:00 compute-0 nova_compute[247671]: 2026-01-27 08:54:00.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 08:54:00 compute-0 nova_compute[247671]: 2026-01-27 08:54:00.465 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:54:00 compute-0 nova_compute[247671]: 2026-01-27 08:54:00.466 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:54:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:00.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:01 compute-0 ceph-mon[74357]: pgmap v903: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:01 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1626718405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:54:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:02.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:02 compute-0 nova_compute[247671]: 2026-01-27 08:54:02.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:54:02 compute-0 nova_compute[247671]: 2026-01-27 08:54:02.486 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:54:02 compute-0 nova_compute[247671]: 2026-01-27 08:54:02.487 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:54:02 compute-0 nova_compute[247671]: 2026-01-27 08:54:02.487 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:54:02 compute-0 nova_compute[247671]: 2026-01-27 08:54:02.487 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:54:02 compute-0 nova_compute[247671]: 2026-01-27 08:54:02.487 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:54:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:02.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:54:02 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14146149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:54:02 compute-0 nova_compute[247671]: 2026-01-27 08:54:02.902 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:54:03 compute-0 nova_compute[247671]: 2026-01-27 08:54:03.052 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:54:03 compute-0 nova_compute[247671]: 2026-01-27 08:54:03.053 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5216MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:54:03 compute-0 nova_compute[247671]: 2026-01-27 08:54:03.053 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:54:03 compute-0 nova_compute[247671]: 2026-01-27 08:54:03.054 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:54:03 compute-0 ceph-mon[74357]: pgmap v904: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:03 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/14146149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:54:03 compute-0 nova_compute[247671]: 2026-01-27 08:54:03.383 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:54:03 compute-0 nova_compute[247671]: 2026-01-27 08:54:03.384 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:54:03 compute-0 nova_compute[247671]: 2026-01-27 08:54:03.404 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:54:03 compute-0 sudo[254458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:03 compute-0 sudo[254458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:03 compute-0 sudo[254458]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:03 compute-0 sudo[254484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:03 compute-0 sudo[254484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:03 compute-0 sudo[254484]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:03 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:54:03 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/540522801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:54:03 compute-0 nova_compute[247671]: 2026-01-27 08:54:03.811 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:54:03 compute-0 nova_compute[247671]: 2026-01-27 08:54:03.816 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:54:03 compute-0 nova_compute[247671]: 2026-01-27 08:54:03.886 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:54:03 compute-0 nova_compute[247671]: 2026-01-27 08:54:03.887 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:54:03 compute-0 nova_compute[247671]: 2026-01-27 08:54:03.887 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:54:03 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:54:03.980 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:54:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:04.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:04 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/540522801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:54:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:04.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:05 compute-0 ceph-mon[74357]: pgmap v905: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:06.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:06 compute-0 ceph-mon[74357]: pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:06.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.729568) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504047729610, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1574, "num_deletes": 252, "total_data_size": 2726627, "memory_usage": 2756720, "flush_reason": "Manual Compaction"}
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504047739629, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1635187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19370, "largest_seqno": 20942, "table_properties": {"data_size": 1629623, "index_size": 2768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14215, "raw_average_key_size": 20, "raw_value_size": 1617321, "raw_average_value_size": 2343, "num_data_blocks": 124, "num_entries": 690, "num_filter_entries": 690, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769503899, "oldest_key_time": 1769503899, "file_creation_time": 1769504047, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 10100 microseconds, and 4322 cpu microseconds.
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.739670) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1635187 bytes OK
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.739689) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.741030) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.741052) EVENT_LOG_v1 {"time_micros": 1769504047741046, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.741068) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 2719977, prev total WAL file size 2719977, number of live WAL files 2.
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.742111) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1596KB)], [44(9486KB)]
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504047742148, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 11349862, "oldest_snapshot_seqno": -1}
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4709 keys, 8501040 bytes, temperature: kUnknown
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504047781386, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8501040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8469408, "index_size": 18786, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 116976, "raw_average_key_size": 24, "raw_value_size": 8383965, "raw_average_value_size": 1780, "num_data_blocks": 779, "num_entries": 4709, "num_filter_entries": 4709, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769504047, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.781584) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8501040 bytes
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.782817) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 288.8 rd, 216.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.3 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(12.1) write-amplify(5.2) OK, records in: 5164, records dropped: 455 output_compression: NoCompression
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.782833) EVENT_LOG_v1 {"time_micros": 1769504047782825, "job": 22, "event": "compaction_finished", "compaction_time_micros": 39297, "compaction_time_cpu_micros": 17466, "output_level": 6, "num_output_files": 1, "total_output_size": 8501040, "num_input_records": 5164, "num_output_records": 4709, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504047783166, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504047784981, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.742023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.785089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.785096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.785098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.785100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:54:07 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:54:07.785101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:54:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:08.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:08.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:08 compute-0 ceph-mon[74357]: pgmap v907: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:10.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:10.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:10 compute-0 ceph-mon[74357]: pgmap v908: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 08:54:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:12.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:12.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:12 compute-0 ceph-mon[74357]: pgmap v909: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 08:54:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 08:54:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:14.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:14 compute-0 podman[254536]: 2026-01-27 08:54:14.237937964 +0000 UTC m=+0.048516494 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 08:54:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:14.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:15 compute-0 ceph-mon[74357]: pgmap v910: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:54:15
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.log']
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:54:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Jan 27 08:54:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:16.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:16.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:17 compute-0 ceph-mon[74357]: pgmap v911: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Jan 27 08:54:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 27 08:54:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:54:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:18.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:54:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:18.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:19 compute-0 ceph-mon[74357]: pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 27 08:54:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 27 08:54:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:20.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:20.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:21 compute-0 ceph-mon[74357]: pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 27 08:54:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 08:54:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:54:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:22.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:54:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:22.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:23 compute-0 ceph-mon[74357]: pgmap v914: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 08:54:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 08:54:23 compute-0 sudo[254560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:23 compute-0 sudo[254560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:23 compute-0 sudo[254560]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:23 compute-0 sudo[254586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:23 compute-0 sudo[254586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:23 compute-0 sudo[254586]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:24.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:54:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:54:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:24.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:25 compute-0 ceph-mon[74357]: pgmap v915: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 08:54:25 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1584489318' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:54:25 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1584489318' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:54:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 58 MiB data, 211 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 27 08:54:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:26.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:26.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:27 compute-0 ceph-mon[74357]: pgmap v916: 305 pgs: 305 active+clean; 58 MiB data, 211 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 27 08:54:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 41 MiB data, 204 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 27 08:54:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:28.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:28.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:29 compute-0 ceph-mon[74357]: pgmap v917: 305 pgs: 305 active+clean; 41 MiB data, 204 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 27 08:54:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 41 MiB data, 204 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 27 08:54:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:30.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:30 compute-0 podman[254614]: 2026-01-27 08:54:30.318739003 +0000 UTC m=+0.122769554 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:54:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:30.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:31 compute-0 ceph-mon[74357]: pgmap v918: 305 pgs: 305 active+clean; 41 MiB data, 204 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 27 08:54:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 27 08:54:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:32.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:32.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:54:33 compute-0 ceph-mon[74357]: pgmap v919: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 27 08:54:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:34.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:34 compute-0 ceph-mon[74357]: pgmap v920: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:54:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:34.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:54:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:36.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:36 compute-0 ceph-mon[74357]: pgmap v921: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:54:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:36.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 597 B/s wr, 4 op/s
Jan 27 08:54:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:38.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:38 compute-0 sudo[254644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:38 compute-0 sudo[254644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:38 compute-0 sudo[254644]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:38 compute-0 sudo[254669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:54:38 compute-0 sudo[254669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:38 compute-0 sudo[254669]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:38 compute-0 sudo[254694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:38 compute-0 sudo[254694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:38 compute-0 sudo[254694]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:38 compute-0 sudo[254719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:54:38 compute-0 sudo[254719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:38 compute-0 ceph-mon[74357]: pgmap v922: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 597 B/s wr, 4 op/s
Jan 27 08:54:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:38.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:40.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:40 compute-0 sudo[254719]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:54:40 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:54:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:54:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:54:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:54:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:54:40 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 18591fb7-912f-4e3c-86a7-7fefada3c80a does not exist
Jan 27 08:54:40 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 823615f4-c754-40fe-9cd5-a5cf0abb13a9 does not exist
Jan 27 08:54:40 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 3fb5a283-f191-4643-b7cb-fdc3c7024cb8 does not exist
Jan 27 08:54:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:54:40 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:54:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:54:40 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:54:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:54:40 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:54:40 compute-0 sudo[254775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:40 compute-0 sudo[254775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:40 compute-0 sudo[254775]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:40 compute-0 sudo[254800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:54:40 compute-0 sudo[254800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:40 compute-0 sudo[254800]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:40 compute-0 sudo[254825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:40 compute-0 sudo[254825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:40 compute-0 sudo[254825]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:40 compute-0 sudo[254850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:54:40 compute-0 sudo[254850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:40.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:41 compute-0 ceph-mon[74357]: pgmap v923: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:54:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:54:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:54:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:54:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:54:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:54:41 compute-0 podman[254914]: 2026-01-27 08:54:41.081329037 +0000 UTC m=+0.042186030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:54:41 compute-0 podman[254914]: 2026-01-27 08:54:41.259153981 +0000 UTC m=+0.220010944 container create f0ac66343f92503f381e6d5f0f7a5b72bde02dc9e13360cd9b7f1cae6eececd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sammet, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 27 08:54:41 compute-0 systemd[1]: Started libpod-conmon-f0ac66343f92503f381e6d5f0f7a5b72bde02dc9e13360cd9b7f1cae6eececd0.scope.
Jan 27 08:54:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:54:41 compute-0 podman[254914]: 2026-01-27 08:54:41.349595406 +0000 UTC m=+0.310452349 container init f0ac66343f92503f381e6d5f0f7a5b72bde02dc9e13360cd9b7f1cae6eececd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sammet, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:54:41 compute-0 podman[254914]: 2026-01-27 08:54:41.358045468 +0000 UTC m=+0.318902441 container start f0ac66343f92503f381e6d5f0f7a5b72bde02dc9e13360cd9b7f1cae6eececd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:54:41 compute-0 podman[254914]: 2026-01-27 08:54:41.361647546 +0000 UTC m=+0.322504499 container attach f0ac66343f92503f381e6d5f0f7a5b72bde02dc9e13360cd9b7f1cae6eececd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sammet, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:54:41 compute-0 competent_sammet[254930]: 167 167
Jan 27 08:54:41 compute-0 systemd[1]: libpod-f0ac66343f92503f381e6d5f0f7a5b72bde02dc9e13360cd9b7f1cae6eececd0.scope: Deactivated successfully.
Jan 27 08:54:41 compute-0 podman[254914]: 2026-01-27 08:54:41.364651239 +0000 UTC m=+0.325508182 container died f0ac66343f92503f381e6d5f0f7a5b72bde02dc9e13360cd9b7f1cae6eececd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sammet, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:54:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-856abf2bd20e7df3a402a74d672c90cfed95f9f76413f541acd8b79e01d30854-merged.mount: Deactivated successfully.
Jan 27 08:54:41 compute-0 podman[254914]: 2026-01-27 08:54:41.409495191 +0000 UTC m=+0.370352144 container remove f0ac66343f92503f381e6d5f0f7a5b72bde02dc9e13360cd9b7f1cae6eececd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sammet, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:54:41 compute-0 systemd[1]: libpod-conmon-f0ac66343f92503f381e6d5f0f7a5b72bde02dc9e13360cd9b7f1cae6eececd0.scope: Deactivated successfully.
Jan 27 08:54:41 compute-0 podman[254953]: 2026-01-27 08:54:41.624407954 +0000 UTC m=+0.076801011 container create e1da1125294322d47ab80bb89bd9616f1602f411abf9e6bb60809571c2b248ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:54:41 compute-0 systemd[1]: Started libpod-conmon-e1da1125294322d47ab80bb89bd9616f1602f411abf9e6bb60809571c2b248ee.scope.
Jan 27 08:54:41 compute-0 podman[254953]: 2026-01-27 08:54:41.592191529 +0000 UTC m=+0.044584636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:54:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d757dab7705013391c7ee3260a167006ea109d8055c624dbc5fb6173897598d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d757dab7705013391c7ee3260a167006ea109d8055c624dbc5fb6173897598d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d757dab7705013391c7ee3260a167006ea109d8055c624dbc5fb6173897598d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d757dab7705013391c7ee3260a167006ea109d8055c624dbc5fb6173897598d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d757dab7705013391c7ee3260a167006ea109d8055c624dbc5fb6173897598d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:41 compute-0 podman[254953]: 2026-01-27 08:54:41.728700678 +0000 UTC m=+0.181093705 container init e1da1125294322d47ab80bb89bd9616f1602f411abf9e6bb60809571c2b248ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:54:41 compute-0 podman[254953]: 2026-01-27 08:54:41.739357921 +0000 UTC m=+0.191750948 container start e1da1125294322d47ab80bb89bd9616f1602f411abf9e6bb60809571c2b248ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:54:41 compute-0 podman[254953]: 2026-01-27 08:54:41.742492307 +0000 UTC m=+0.194885334 container attach e1da1125294322d47ab80bb89bd9616f1602f411abf9e6bb60809571c2b248ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:54:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:42.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:42 compute-0 wonderful_engelbart[254970]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:54:42 compute-0 wonderful_engelbart[254970]: --> relative data size: 1.0
Jan 27 08:54:42 compute-0 wonderful_engelbart[254970]: --> All data devices are unavailable
Jan 27 08:54:42 compute-0 systemd[1]: libpod-e1da1125294322d47ab80bb89bd9616f1602f411abf9e6bb60809571c2b248ee.scope: Deactivated successfully.
Jan 27 08:54:42 compute-0 podman[254953]: 2026-01-27 08:54:42.569465252 +0000 UTC m=+1.021858309 container died e1da1125294322d47ab80bb89bd9616f1602f411abf9e6bb60809571c2b248ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:54:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d757dab7705013391c7ee3260a167006ea109d8055c624dbc5fb6173897598d0-merged.mount: Deactivated successfully.
Jan 27 08:54:42 compute-0 podman[254953]: 2026-01-27 08:54:42.620249277 +0000 UTC m=+1.072642314 container remove e1da1125294322d47ab80bb89bd9616f1602f411abf9e6bb60809571c2b248ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:54:42 compute-0 systemd[1]: libpod-conmon-e1da1125294322d47ab80bb89bd9616f1602f411abf9e6bb60809571c2b248ee.scope: Deactivated successfully.
Jan 27 08:54:42 compute-0 sudo[254850]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:42 compute-0 sudo[254999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:42 compute-0 sudo[254999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:42 compute-0 sudo[254999]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:42.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:42 compute-0 sudo[255024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:54:42 compute-0 sudo[255024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:42 compute-0 sudo[255024]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:42 compute-0 sudo[255049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:42 compute-0 sudo[255049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:42 compute-0 sudo[255049]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:42 compute-0 sudo[255074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:54:42 compute-0 sudo[255074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:43 compute-0 ceph-mon[74357]: pgmap v924: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:43 compute-0 podman[255138]: 2026-01-27 08:54:43.345984521 +0000 UTC m=+0.042719434 container create 195d4d7fea06260d8f98df480a78394d4bb4adaa3d90f16fb21bf41bf1e13521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kilby, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 08:54:43 compute-0 systemd[1]: Started libpod-conmon-195d4d7fea06260d8f98df480a78394d4bb4adaa3d90f16fb21bf41bf1e13521.scope.
Jan 27 08:54:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:54:43 compute-0 podman[255138]: 2026-01-27 08:54:43.415003397 +0000 UTC m=+0.111738330 container init 195d4d7fea06260d8f98df480a78394d4bb4adaa3d90f16fb21bf41bf1e13521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kilby, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:54:43 compute-0 podman[255138]: 2026-01-27 08:54:43.422765551 +0000 UTC m=+0.119500454 container start 195d4d7fea06260d8f98df480a78394d4bb4adaa3d90f16fb21bf41bf1e13521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 08:54:43 compute-0 podman[255138]: 2026-01-27 08:54:43.328070219 +0000 UTC m=+0.024805152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:54:43 compute-0 podman[255138]: 2026-01-27 08:54:43.425528277 +0000 UTC m=+0.122263220 container attach 195d4d7fea06260d8f98df480a78394d4bb4adaa3d90f16fb21bf41bf1e13521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:54:43 compute-0 unruffled_kilby[255154]: 167 167
Jan 27 08:54:43 compute-0 systemd[1]: libpod-195d4d7fea06260d8f98df480a78394d4bb4adaa3d90f16fb21bf41bf1e13521.scope: Deactivated successfully.
Jan 27 08:54:43 compute-0 podman[255138]: 2026-01-27 08:54:43.429727361 +0000 UTC m=+0.126462274 container died 195d4d7fea06260d8f98df480a78394d4bb4adaa3d90f16fb21bf41bf1e13521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 27 08:54:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa819446b427e102d68993ae9a30ccc86dd565f986b839aac5e89dc56d68b53c-merged.mount: Deactivated successfully.
Jan 27 08:54:43 compute-0 podman[255138]: 2026-01-27 08:54:43.456521188 +0000 UTC m=+0.153256101 container remove 195d4d7fea06260d8f98df480a78394d4bb4adaa3d90f16fb21bf41bf1e13521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kilby, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:54:43 compute-0 systemd[1]: libpod-conmon-195d4d7fea06260d8f98df480a78394d4bb4adaa3d90f16fb21bf41bf1e13521.scope: Deactivated successfully.
Jan 27 08:54:43 compute-0 podman[255176]: 2026-01-27 08:54:43.618187838 +0000 UTC m=+0.043826705 container create e042efbdebad0c7124b2c4ba7fc5b91e722e27687bfdcea010e76d073d6a2eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:54:43 compute-0 systemd[1]: Started libpod-conmon-e042efbdebad0c7124b2c4ba7fc5b91e722e27687bfdcea010e76d073d6a2eaa.scope.
Jan 27 08:54:43 compute-0 podman[255176]: 2026-01-27 08:54:43.597803828 +0000 UTC m=+0.023442715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:54:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d1af85170788e22b2f399bbc3f66b36dc76adda92aae9c1baca34194265ec2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d1af85170788e22b2f399bbc3f66b36dc76adda92aae9c1baca34194265ec2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d1af85170788e22b2f399bbc3f66b36dc76adda92aae9c1baca34194265ec2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d1af85170788e22b2f399bbc3f66b36dc76adda92aae9c1baca34194265ec2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:43 compute-0 podman[255176]: 2026-01-27 08:54:43.712978191 +0000 UTC m=+0.138617028 container init e042efbdebad0c7124b2c4ba7fc5b91e722e27687bfdcea010e76d073d6a2eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_cohen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 27 08:54:43 compute-0 podman[255176]: 2026-01-27 08:54:43.726331348 +0000 UTC m=+0.151970185 container start e042efbdebad0c7124b2c4ba7fc5b91e722e27687bfdcea010e76d073d6a2eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 27 08:54:43 compute-0 podman[255176]: 2026-01-27 08:54:43.730003979 +0000 UTC m=+0.155642816 container attach e042efbdebad0c7124b2c4ba7fc5b91e722e27687bfdcea010e76d073d6a2eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_cohen, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:54:43 compute-0 sudo[255197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:43 compute-0 sudo[255197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:43 compute-0 sudo[255197]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:43 compute-0 sudo[255224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:43 compute-0 sudo[255224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:43 compute-0 sudo[255224]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:44.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:44 compute-0 adoring_cohen[255194]: {
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:     "0": [
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:         {
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             "devices": [
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "/dev/loop3"
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             ],
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             "lv_name": "ceph_lv0",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             "lv_size": "7511998464",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             "name": "ceph_lv0",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             "tags": {
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "ceph.cluster_name": "ceph",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "ceph.crush_device_class": "",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "ceph.encrypted": "0",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "ceph.osd_id": "0",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "ceph.type": "block",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:                 "ceph.vdo": "0"
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             },
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             "type": "block",
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:             "vg_name": "ceph_vg0"
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:         }
Jan 27 08:54:44 compute-0 adoring_cohen[255194]:     ]
Jan 27 08:54:44 compute-0 adoring_cohen[255194]: }
Jan 27 08:54:44 compute-0 systemd[1]: libpod-e042efbdebad0c7124b2c4ba7fc5b91e722e27687bfdcea010e76d073d6a2eaa.scope: Deactivated successfully.
Jan 27 08:54:44 compute-0 podman[255176]: 2026-01-27 08:54:44.543833613 +0000 UTC m=+0.969472460 container died e042efbdebad0c7124b2c4ba7fc5b91e722e27687bfdcea010e76d073d6a2eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:54:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-04d1af85170788e22b2f399bbc3f66b36dc76adda92aae9c1baca34194265ec2-merged.mount: Deactivated successfully.
Jan 27 08:54:44 compute-0 podman[255176]: 2026-01-27 08:54:44.616014916 +0000 UTC m=+1.041653753 container remove e042efbdebad0c7124b2c4ba7fc5b91e722e27687bfdcea010e76d073d6a2eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_cohen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:54:44 compute-0 systemd[1]: libpod-conmon-e042efbdebad0c7124b2c4ba7fc5b91e722e27687bfdcea010e76d073d6a2eaa.scope: Deactivated successfully.
Jan 27 08:54:44 compute-0 podman[255254]: 2026-01-27 08:54:44.643706367 +0000 UTC m=+0.066997301 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:54:44 compute-0 sudo[255074]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:44 compute-0 sudo[255287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:44 compute-0 sudo[255287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:44 compute-0 sudo[255287]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:44 compute-0 sudo[255312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:54:44 compute-0 sudo[255312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:44 compute-0 sudo[255312]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:44.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:44 compute-0 sudo[255337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:44 compute-0 sudo[255337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:44 compute-0 sudo[255337]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:44 compute-0 sudo[255362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:54:44 compute-0 sudo[255362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:54:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:54:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:54:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:54:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:54:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:54:45 compute-0 ceph-mon[74357]: pgmap v925: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:45 compute-0 podman[255429]: 2026-01-27 08:54:45.261582378 +0000 UTC m=+0.061776828 container create ad9550f5c121e4c5c43b91432e5d7b9021320c72605df1b158fd6deadf2b4a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 08:54:45 compute-0 systemd[1]: Started libpod-conmon-ad9550f5c121e4c5c43b91432e5d7b9021320c72605df1b158fd6deadf2b4a61.scope.
Jan 27 08:54:45 compute-0 podman[255429]: 2026-01-27 08:54:45.238525595 +0000 UTC m=+0.038720045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:54:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:54:45 compute-0 podman[255429]: 2026-01-27 08:54:45.355737265 +0000 UTC m=+0.155931735 container init ad9550f5c121e4c5c43b91432e5d7b9021320c72605df1b158fd6deadf2b4a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 08:54:45 compute-0 podman[255429]: 2026-01-27 08:54:45.367120537 +0000 UTC m=+0.167314987 container start ad9550f5c121e4c5c43b91432e5d7b9021320c72605df1b158fd6deadf2b4a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:54:45 compute-0 podman[255429]: 2026-01-27 08:54:45.370903131 +0000 UTC m=+0.171097611 container attach ad9550f5c121e4c5c43b91432e5d7b9021320c72605df1b158fd6deadf2b4a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 08:54:45 compute-0 youthful_mcnulty[255445]: 167 167
Jan 27 08:54:45 compute-0 systemd[1]: libpod-ad9550f5c121e4c5c43b91432e5d7b9021320c72605df1b158fd6deadf2b4a61.scope: Deactivated successfully.
Jan 27 08:54:45 compute-0 podman[255429]: 2026-01-27 08:54:45.373705688 +0000 UTC m=+0.173900168 container died ad9550f5c121e4c5c43b91432e5d7b9021320c72605df1b158fd6deadf2b4a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:54:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5467ec91bfa33e6f11c91a1e0b4f53fcbfee03f04dfe64e0b165baf67b3d76c8-merged.mount: Deactivated successfully.
Jan 27 08:54:45 compute-0 podman[255429]: 2026-01-27 08:54:45.426122878 +0000 UTC m=+0.226317358 container remove ad9550f5c121e4c5c43b91432e5d7b9021320c72605df1b158fd6deadf2b4a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 27 08:54:45 compute-0 systemd[1]: libpod-conmon-ad9550f5c121e4c5c43b91432e5d7b9021320c72605df1b158fd6deadf2b4a61.scope: Deactivated successfully.
Jan 27 08:54:45 compute-0 podman[255469]: 2026-01-27 08:54:45.666257414 +0000 UTC m=+0.059205778 container create c377a0dcdfa9d71c387a5bcbf8edc6d1613813d08b560bb70b1b5af428f1aff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:54:45 compute-0 systemd[1]: Started libpod-conmon-c377a0dcdfa9d71c387a5bcbf8edc6d1613813d08b560bb70b1b5af428f1aff1.scope.
Jan 27 08:54:45 compute-0 podman[255469]: 2026-01-27 08:54:45.637222276 +0000 UTC m=+0.030170730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:54:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b29d90e82a57011dd1ec773882ee2926cb89467ccd34b6f145e89cd9b51c93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b29d90e82a57011dd1ec773882ee2926cb89467ccd34b6f145e89cd9b51c93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b29d90e82a57011dd1ec773882ee2926cb89467ccd34b6f145e89cd9b51c93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b29d90e82a57011dd1ec773882ee2926cb89467ccd34b6f145e89cd9b51c93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:45 compute-0 podman[255469]: 2026-01-27 08:54:45.76734103 +0000 UTC m=+0.160289494 container init c377a0dcdfa9d71c387a5bcbf8edc6d1613813d08b560bb70b1b5af428f1aff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 08:54:45 compute-0 podman[255469]: 2026-01-27 08:54:45.77570058 +0000 UTC m=+0.168648944 container start c377a0dcdfa9d71c387a5bcbf8edc6d1613813d08b560bb70b1b5af428f1aff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:54:45 compute-0 podman[255469]: 2026-01-27 08:54:45.779079553 +0000 UTC m=+0.172027957 container attach c377a0dcdfa9d71c387a5bcbf8edc6d1613813d08b560bb70b1b5af428f1aff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:54:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:46.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:46 compute-0 lucid_kare[255486]: {
Jan 27 08:54:46 compute-0 lucid_kare[255486]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:54:46 compute-0 lucid_kare[255486]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:54:46 compute-0 lucid_kare[255486]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:54:46 compute-0 lucid_kare[255486]:         "osd_id": 0,
Jan 27 08:54:46 compute-0 lucid_kare[255486]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:54:46 compute-0 lucid_kare[255486]:         "type": "bluestore"
Jan 27 08:54:46 compute-0 lucid_kare[255486]:     }
Jan 27 08:54:46 compute-0 lucid_kare[255486]: }
Jan 27 08:54:46 compute-0 systemd[1]: libpod-c377a0dcdfa9d71c387a5bcbf8edc6d1613813d08b560bb70b1b5af428f1aff1.scope: Deactivated successfully.
Jan 27 08:54:46 compute-0 conmon[255486]: conmon c377a0dcdfa9d71c387a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c377a0dcdfa9d71c387a5bcbf8edc6d1613813d08b560bb70b1b5af428f1aff1.scope/container/memory.events
Jan 27 08:54:46 compute-0 podman[255469]: 2026-01-27 08:54:46.623834226 +0000 UTC m=+1.016782590 container died c377a0dcdfa9d71c387a5bcbf8edc6d1613813d08b560bb70b1b5af428f1aff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:54:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:46.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-81b29d90e82a57011dd1ec773882ee2926cb89467ccd34b6f145e89cd9b51c93-merged.mount: Deactivated successfully.
Jan 27 08:54:46 compute-0 podman[255469]: 2026-01-27 08:54:46.999569836 +0000 UTC m=+1.392518200 container remove c377a0dcdfa9d71c387a5bcbf8edc6d1613813d08b560bb70b1b5af428f1aff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:54:47 compute-0 systemd[1]: libpod-conmon-c377a0dcdfa9d71c387a5bcbf8edc6d1613813d08b560bb70b1b5af428f1aff1.scope: Deactivated successfully.
Jan 27 08:54:47 compute-0 sudo[255362]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:54:47 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:54:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:54:47 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:54:47 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 793e03c0-06e8-4887-b30d-9487921b9b11 does not exist
Jan 27 08:54:47 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5b3bba9a-e817-4e63-a305-d93ca15e4457 does not exist
Jan 27 08:54:47 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 0b62e02d-6c24-4e90-bc50-e27bfd2a439b does not exist
Jan 27 08:54:47 compute-0 ceph-mon[74357]: pgmap v926: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:47 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:54:47 compute-0 sudo[255521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:54:47 compute-0 sudo[255521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:47 compute-0 sudo[255521]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:47 compute-0 sudo[255546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:54:47 compute-0 sudo[255546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:54:47 compute-0 sudo[255546]: pam_unix(sudo:session): session closed for user root
Jan 27 08:54:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:48.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:54:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:48.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:49 compute-0 ceph-mon[74357]: pgmap v927: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:50.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:50.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:51 compute-0 ceph-mon[74357]: pgmap v928: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:52.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:52.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:53 compute-0 ceph-mon[74357]: pgmap v929: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:54.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:54:54.237 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:54:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:54:54.238 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:54:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:54:54.238 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:54:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:54.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:54:55.336 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:54:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:54:55.337 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 08:54:55 compute-0 ceph-mon[74357]: pgmap v930: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:54:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 170 B/s wr, 6 op/s
Jan 27 08:54:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:56.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:54:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:56.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:54:57 compute-0 ceph-mon[74357]: pgmap v931: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 170 B/s wr, 6 op/s
Jan 27 08:54:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 27 08:54:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:54:58.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:54:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:54:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:54:58.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:54:58 compute-0 nova_compute[247671]: 2026-01-27 08:54:58.888 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:54:58 compute-0 nova_compute[247671]: 2026-01-27 08:54:58.889 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:54:58 compute-0 nova_compute[247671]: 2026-01-27 08:54:58.889 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:54:58 compute-0 nova_compute[247671]: 2026-01-27 08:54:58.889 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:54:58 compute-0 nova_compute[247671]: 2026-01-27 08:54:58.889 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:54:58 compute-0 nova_compute[247671]: 2026-01-27 08:54:58.889 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 08:54:59 compute-0 nova_compute[247671]: 2026-01-27 08:54:59.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:54:59 compute-0 ceph-mon[74357]: pgmap v932: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 27 08:54:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/972032112' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:54:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/972032112' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:55:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 65 MiB data, 204 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 862 KiB/s wr, 19 op/s
Jan 27 08:55:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:00.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:00 compute-0 nova_compute[247671]: 2026-01-27 08:55:00.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:55:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/436997445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:55:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 27 08:55:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:00.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 27 08:55:01 compute-0 podman[255578]: 2026-01-27 08:55:01.321766723 +0000 UTC m=+0.126462744 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 08:55:01 compute-0 anacron[29966]: Job `cron.weekly' started
Jan 27 08:55:01 compute-0 anacron[29966]: Job `cron.weekly' terminated
Jan 27 08:55:01 compute-0 nova_compute[247671]: 2026-01-27 08:55:01.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:55:01 compute-0 ceph-mon[74357]: pgmap v933: 305 pgs: 305 active+clean; 65 MiB data, 204 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 862 KiB/s wr, 19 op/s
Jan 27 08:55:01 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2753573735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:55:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 08:55:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:02.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:02 compute-0 nova_compute[247671]: 2026-01-27 08:55:02.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:55:02 compute-0 nova_compute[247671]: 2026-01-27 08:55:02.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 08:55:02 compute-0 nova_compute[247671]: 2026-01-27 08:55:02.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 08:55:02 compute-0 nova_compute[247671]: 2026-01-27 08:55:02.438 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:55:02 compute-0 nova_compute[247671]: 2026-01-27 08:55:02.439 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:55:02 compute-0 nova_compute[247671]: 2026-01-27 08:55:02.463 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:55:02 compute-0 nova_compute[247671]: 2026-01-27 08:55:02.463 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:55:02 compute-0 nova_compute[247671]: 2026-01-27 08:55:02.463 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:55:02 compute-0 nova_compute[247671]: 2026-01-27 08:55:02.463 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:55:02 compute-0 nova_compute[247671]: 2026-01-27 08:55:02.463 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:55:02 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/544966619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:55:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:55:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 08:55:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:02.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 08:55:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:55:02 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/985513945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:55:02 compute-0 nova_compute[247671]: 2026-01-27 08:55:02.979 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.200 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.201 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5201MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.201 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.202 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.472 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance defc9786-ca44-428e-9bc9-fa6596e75ba7 has allocations against this compute host but is not found in the database.
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.472 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.473 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.586 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing inventories for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 08:55:03 compute-0 ceph-mon[74357]: pgmap v934: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 08:55:03 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/985513945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:55:03 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1144768757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.661 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Updating ProviderTree inventory for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.661 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Updating inventory in ProviderTree for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.690 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing aggregate associations for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.722 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing trait associations for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 08:55:03 compute-0 nova_compute[247671]: 2026-01-27 08:55:03.765 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:55:03 compute-0 sudo[255632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:03 compute-0 sudo[255632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:03 compute-0 sudo[255632]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:03 compute-0 sudo[255676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:03 compute-0 sudo[255676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:03 compute-0 sudo[255676]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 08:55:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:04.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:55:04 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2699335064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:55:04 compute-0 nova_compute[247671]: 2026-01-27 08:55:04.230 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:55:04 compute-0 nova_compute[247671]: 2026-01-27 08:55:04.236 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:55:04 compute-0 nova_compute[247671]: 2026-01-27 08:55:04.259 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:55:04 compute-0 nova_compute[247671]: 2026-01-27 08:55:04.262 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:55:04 compute-0 nova_compute[247671]: 2026-01-27 08:55:04.262 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:55:04 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:55:04.339 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:55:04 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2699335064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:55:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:04.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:05 compute-0 ceph-mon[74357]: pgmap v935: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 08:55:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 08:55:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:06.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:06.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:07 compute-0 ceph-mon[74357]: pgmap v936: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 08:55:07 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2520415888' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:55:07 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2520415888' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:55:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:55:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 400 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 08:55:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:08.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:55:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:08.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:55:09 compute-0 ceph-mon[74357]: pgmap v937: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 400 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 08:55:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 64 MiB data, 205 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 08:55:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:10.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:10.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:11 compute-0 ceph-mon[74357]: pgmap v938: 305 pgs: 305 active+clean; 64 MiB data, 205 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 08:55:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 954 KiB/s wr, 37 op/s
Jan 27 08:55:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:55:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:12.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:55:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:55:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:12.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:13 compute-0 ceph-mon[74357]: pgmap v939: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 954 KiB/s wr, 37 op/s
Jan 27 08:55:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:55:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:14.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:55:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:14.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:55:15
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta']
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:55:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:55:15 compute-0 podman[255708]: 2026-01-27 08:55:15.245605095 +0000 UTC m=+0.056546614 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 08:55:15 compute-0 ceph-mon[74357]: pgmap v940: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:55:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:55:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:16.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:16.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:55:17 compute-0 ceph-mon[74357]: pgmap v941: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.832773) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504117832831, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 820, "num_deletes": 251, "total_data_size": 1179116, "memory_usage": 1197920, "flush_reason": "Manual Compaction"}
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504117841874, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1165856, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20943, "largest_seqno": 21762, "table_properties": {"data_size": 1161750, "index_size": 1822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9219, "raw_average_key_size": 19, "raw_value_size": 1153474, "raw_average_value_size": 2438, "num_data_blocks": 82, "num_entries": 473, "num_filter_entries": 473, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769504048, "oldest_key_time": 1769504048, "file_creation_time": 1769504117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 9165 microseconds, and 4399 cpu microseconds.
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.841946) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1165856 bytes OK
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.841965) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.844349) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.844365) EVENT_LOG_v1 {"time_micros": 1769504117844359, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.844384) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1175141, prev total WAL file size 1175141, number of live WAL files 2.
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.845008) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
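The quoted key range above is hex-encoded: it decodes to the monitor's paxos keyspace ("paxos" + NUL + a transaction number), so this manual compaction is trimming old paxos entries (the analogous range logged at 08:55:27 below decodes the same way, to logm 252..505). A minimal decode in Python:

    # Decode the hex key range from the manual-compaction line above.
    # Monitor store keys are "<prefix>\0<name>".
    for h in ("7061786F730031353036", "7061786F730031373538"):
        prefix, name = bytes.fromhex(h).split(b"\x00")
        print(prefix.decode(), name.decode())
    # -> paxos 1506
    # -> paxos 1758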
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1138KB)], [47(8301KB)]
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504117845078, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9666896, "oldest_snapshot_seqno": -1}
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4667 keys, 7630457 bytes, temperature: kUnknown
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504117878615, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7630457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7599909, "index_size": 17766, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11717, "raw_key_size": 116733, "raw_average_key_size": 25, "raw_value_size": 7515944, "raw_average_value_size": 1610, "num_data_blocks": 730, "num_entries": 4667, "num_filter_entries": 4667, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769504117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.879147) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7630457 bytes
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.880627) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 287.7 rd, 227.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.1 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(14.8) write-amplify(6.5) OK, records in: 5182, records dropped: 515 output_compression: NoCompression
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.880649) EVENT_LOG_v1 {"time_micros": 1769504117880638, "job": 24, "event": "compaction_finished", "compaction_time_micros": 33605, "compaction_time_cpu_micros": 18732, "output_level": 6, "num_output_files": 1, "total_output_size": 7630457, "num_input_records": 5182, "num_output_records": 4667, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504117881207, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504117883131, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.844850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.883181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.883186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.883188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.883190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:55:17 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:17.883192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
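The amplification figures in JOB 24's summary follow directly from the byte counts in the events above (table #49 in at L0, table #47 in at L6, table #50 out); a quick check that reproduces them:

    # Reproduce JOB 24's amplification numbers from the logged byte counts.
    in_l0 = 1_165_856            # input table #49 (the Level-0 flush output)
    in_l6 = 9_666_896 - in_l0    # input_data_size minus the L0 file -> table #47
    out   = 7_630_457            # output table #50
    print(round(out / in_l0, 1))                    # 6.5  -> write-amplify(6.5)
    print(round((in_l0 + in_l6 + out) / in_l0, 1))  # 14.8 -> read-write-amplify(14.8)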
Jan 27 08:55:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:55:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:55:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:18.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:55:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:18.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
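The beast access lines repeated throughout this capture have a fixed shape (request pointer, client IP, user, timestamp, request line, status, bytes, latency), so they parse with one expression; a sketch, where BEAST is just a hypothetical pattern name:

    import re

    # Pull client IP, HTTP status and latency out of a radosgw "beast" line.
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<lat>[\d.]+)s'
    )

    line = ('beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous '
            '[27/Jan/2026:08:55:18.223 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000027s')
    m = BEAST.match(line)
    print(m['ip'], m['status'], float(m['lat']))  # 192.168.122.100 200 0.001000027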
Jan 27 08:55:19 compute-0 ceph-mon[74357]: pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:55:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:55:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:55:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:20.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:55:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:20.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:21 compute-0 ceph-mon[74357]: pgmap v943: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:55:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:55:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:22.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:55:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:22.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:23 compute-0 ceph-mon[74357]: pgmap v944: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:24 compute-0 sudo[255732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:24 compute-0 sudo[255732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:24 compute-0 sudo[255732]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:24 compute-0 sudo[255757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:24 compute-0 sudo[255757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:24 compute-0 sudo[255757]: pam_unix(sudo:session): session closed for user root
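These paired sudo open/close lines with COMMAND=/bin/true recur every few seconds and are most likely connection probes from cephadm running as ceph-admin (the later "which python3" and cephadm gather-facts calls fit the same pattern). Filtering them out of a journal export is a one-liner; a sketch with a hypothetical sudo_commands helper:

    # Yield the COMMAND= portion of sudo audit lines from an exported journal.
    def sudo_commands(lines):
        for ln in lines:
            if " sudo[" in ln and "COMMAND=" in ln:
                yield ln.split("COMMAND=", 1)[1]

    print(list(sudo_commands([
        "Jan 27 08:55:24 compute-0 sudo[255732]: ceph-admin : "
        "PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true"
    ])))  # ['/bin/true']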
Jan 27 08:55:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:55:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:55:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
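Each "pg target" above is capacity ratio x bias x a cluster-wide PG budget. The budget itself is not printed, but the numbers are consistent with osd_count * mon_target_pg_per_osd = 3 * 100 = 300, which the sketch below assumes. The raw target is then quantized to a power of two, and pg_num only actually moves when it is off by roughly a factor of three (the autoscaler's default threshold), which is why every pool stays at its current value here.

    # Reproduce the "pg target" column, assuming a budget of 3 OSDs * 100.
    budget = 300
    for pool, usage, bias in [
        (".mgr",               2.0538165363856318e-05, 1.0),
        ("images",             0.0019031427391587568,  1.0),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0),
    ]:
        print(pool, usage * bias * budget)
    # .mgr               0.006161449609156895
    # images             0.570942821747627
    # cephfs.cephfs.meta 0.0017448352875488553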
Jan 27 08:55:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:55:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:24.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:55:25 compute-0 ceph-mon[74357]: pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:55:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:26.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:55:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:26.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:27 compute-0 ceph-mon[74357]: pgmap v946: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.762648) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504127762715, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 334, "num_deletes": 257, "total_data_size": 173473, "memory_usage": 181288, "flush_reason": "Manual Compaction"}
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504127773945, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 172506, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21763, "largest_seqno": 22096, "table_properties": {"data_size": 170388, "index_size": 282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4877, "raw_average_key_size": 16, "raw_value_size": 166234, "raw_average_value_size": 571, "num_data_blocks": 13, "num_entries": 291, "num_filter_entries": 291, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769504118, "oldest_key_time": 1769504118, "file_creation_time": 1769504127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 11336 microseconds, and 1868 cpu microseconds.
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.773995) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 172506 bytes OK
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.774017) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.778132) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.778162) EVENT_LOG_v1 {"time_micros": 1769504127778153, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.778188) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 171166, prev total WAL file size 171166, number of live WAL files 2.
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.778803) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353035' seq:0, type:0; will stop at (end)
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(168KB)], [50(7451KB)]
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504127778916, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 7802963, "oldest_snapshot_seqno": -1}
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4436 keys, 7681601 bytes, temperature: kUnknown
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504127898427, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 7681601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7651950, "index_size": 17445, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 113119, "raw_average_key_size": 25, "raw_value_size": 7571406, "raw_average_value_size": 1706, "num_data_blocks": 712, "num_entries": 4436, "num_filter_entries": 4436, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769504127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.898655) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7681601 bytes
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.901344) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.4 rd, 64.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 7.3 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(89.8) write-amplify(44.5) OK, records in: 4958, records dropped: 522 output_compression: NoCompression
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.901366) EVENT_LOG_v1 {"time_micros": 1769504127901355, "job": 26, "event": "compaction_finished", "compaction_time_micros": 119346, "compaction_time_cpu_micros": 34752, "output_level": 6, "num_output_files": 1, "total_output_size": 7681601, "num_input_records": 4958, "num_output_records": 4436, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504127901513, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504127902859, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.778627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.902908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.902912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.902914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.902916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:55:27 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:55:27.902918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
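Every EVENT_LOG_v1 payload above is a plain JSON object following the marker, so the monitor's flush/compaction history can be pulled straight out of the journal; a small sketch (rocksdb_events is a hypothetical helper, and partition also handles the "(Original Log Time ...)" prefix):

    import json

    # Yield the parsed JSON object from any line carrying an EVENT_LOG_v1 payload.
    def rocksdb_events(lines):
        for ln in lines:
            _, _, payload = ln.partition("EVENT_LOG_v1 ")
            if payload:
                yield json.loads(payload)

    ev = next(rocksdb_events([
        'rocksdb: EVENT_LOG_v1 {"time_micros": 1769504127901355,'
        ' "job": 26, "event": "compaction_finished", "output_level": 6}'
    ]))
    print(ev["event"], ev["job"])  # compaction_finished 26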
Jan 27 08:55:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:28.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:28.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:29 compute-0 ceph-mon[74357]: pgmap v947: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:30.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:30.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:31 compute-0 ceph-mon[74357]: pgmap v948: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:32.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:32 compute-0 podman[255786]: 2026-01-27 08:55:32.294612442 +0000 UTC m=+0.111675699 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
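The config_data=... blob inside that podman health line is a Python dict literal embedded in free text, so it can be recovered with a brace-matching scan plus ast.literal_eval; extract_config below is a hypothetical helper and deliberately not a robust parser:

    import ast

    # Recover the config_data dict embedded in a podman container-event line.
    def extract_config(line):
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            depth += ch == "{"
            depth -= ch == "}"
            if depth == 0:
                return ast.literal_eval(line[start : i + 1])

    cfg = extract_config(
        "config_data={'healthcheck': {'test': '/openstack/healthcheck'}}, tcib_managed=true"
    )
    print(cfg["healthcheck"]["test"])  # /openstack/healthcheck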
Jan 27 08:55:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:55:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:55:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:32.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:55:33 compute-0 ceph-mon[74357]: pgmap v949: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:34.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:34.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:36 compute-0 ceph-mon[74357]: pgmap v950: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:36.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:36.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:37 compute-0 ceph-mon[74357]: pgmap v951: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:55:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:38.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:38.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:39 compute-0 ceph-mon[74357]: pgmap v952: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:39 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:55:39.627 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:55:39 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:55:39.628 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 08:55:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:40.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:40.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:41 compute-0 ceph-mon[74357]: pgmap v953: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:41 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:55:41.630 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
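Behaviourally, the two agent entries above say: SB_Global.nb_cfg bumped 5 -> 6, wait the logged 2 seconds, then write the new value into Chassis_Private.external_ids via an ovsdbapp DbSetCommand. An illustrative sketch of that flow; on_sb_global_update and write_external_ids are hypothetical stand-ins, not the agent's real API:

    import threading

    # After an SB_Global nb_cfg update, delay ~2 s, then record the new
    # value under the neutron:ovn-metadata-sb-cfg key (as the agent logs).
    def on_sb_global_update(nb_cfg, write_external_ids, delay=2.0):
        threading.Timer(
            delay,
            write_external_ids,
            args=({"neutron:ovn-metadata-sb-cfg": str(nb_cfg)},),
        ).start()

    on_sb_global_update(6, lambda ids: print("db_set Chassis_Private", ids))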
Jan 27 08:55:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:42.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:55:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:42.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:43 compute-0 ceph-mon[74357]: pgmap v954: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:44 compute-0 sudo[255819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:44 compute-0 sudo[255819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:44.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:44 compute-0 sudo[255819]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:44 compute-0 sudo[255844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:44 compute-0 sudo[255844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:44 compute-0 sudo[255844]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:44.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:55:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:55:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:55:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:55:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:55:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:55:45 compute-0 ceph-mon[74357]: pgmap v955: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:46 compute-0 podman[255870]: 2026-01-27 08:55:46.252570591 +0000 UTC m=+0.065014771 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 08:55:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:46.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:46.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:47 compute-0 sudo[255890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:47 compute-0 sudo[255890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:47 compute-0 sudo[255890]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:55:47 compute-0 sudo[255915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:55:47 compute-0 sudo[255915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:47 compute-0 sudo[255915]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:47 compute-0 sudo[255940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:47 compute-0 sudo[255940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:47 compute-0 sudo[255940]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:47 compute-0 sudo[255965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:55:47 compute-0 sudo[255965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:47 compute-0 ceph-mon[74357]: pgmap v956: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:48.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:48 compute-0 sudo[255965]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:55:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:55:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:55:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:55:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:55:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:55:48 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c1870182-c2e8-4cd3-8817-09c03f625187 does not exist
Jan 27 08:55:48 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 740fc1fc-7e59-48fe-bd73-35e9c6824ab9 does not exist
Jan 27 08:55:48 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f162c287-98fd-4980-9abc-38ddaa7bdab2 does not exist
Jan 27 08:55:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:55:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:55:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:55:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:55:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:55:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:55:48 compute-0 sudo[256021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:48 compute-0 sudo[256021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:48 compute-0 sudo[256021]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:48 compute-0 sudo[256046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:55:48 compute-0 sudo[256046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:48 compute-0 sudo[256046]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:48 compute-0 sudo[256071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:48 compute-0 sudo[256071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:48 compute-0 sudo[256071]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:48.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:48 compute-0 sudo[256096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:55:48 compute-0 sudo[256096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:49 compute-0 podman[256158]: 2026-01-27 08:55:49.198010655 +0000 UTC m=+0.025702820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:55:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:50.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:50.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:52.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:52 compute-0 podman[256158]: 2026-01-27 08:55:52.678746854 +0000 UTC m=+3.506439019 container create e2eba745c5c91c45a713086c0a963231b19f1ee9e07a68ec7c763c19ec17ef9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:55:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:55:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:55:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:55:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:55:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:55:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:55:52 compute-0 systemd[1]: Started libpod-conmon-e2eba745c5c91c45a713086c0a963231b19f1ee9e07a68ec7c763c19ec17ef9c.scope.
Jan 27 08:55:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:55:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:55:52 compute-0 podman[256158]: 2026-01-27 08:55:52.783453374 +0000 UTC m=+3.611145599 container init e2eba745c5c91c45a713086c0a963231b19f1ee9e07a68ec7c763c19ec17ef9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 08:55:52 compute-0 podman[256158]: 2026-01-27 08:55:52.794427153 +0000 UTC m=+3.622119308 container start e2eba745c5c91c45a713086c0a963231b19f1ee9e07a68ec7c763c19ec17ef9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:55:52 compute-0 priceless_nash[256177]: 167 167
Jan 27 08:55:52 compute-0 systemd[1]: libpod-e2eba745c5c91c45a713086c0a963231b19f1ee9e07a68ec7c763c19ec17ef9c.scope: Deactivated successfully.
Jan 27 08:55:52 compute-0 podman[256158]: 2026-01-27 08:55:52.81084528 +0000 UTC m=+3.638537655 container attach e2eba745c5c91c45a713086c0a963231b19f1ee9e07a68ec7c763c19ec17ef9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 27 08:55:52 compute-0 podman[256158]: 2026-01-27 08:55:52.812440193 +0000 UTC m=+3.640132368 container died e2eba745c5c91c45a713086c0a963231b19f1ee9e07a68ec7c763c19ec17ef9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:55:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:55:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:52.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6f07a6202ea07ad8530ed96d672e9fca141899cb19fea494fddc58a52e4430a-merged.mount: Deactivated successfully.
Jan 27 08:55:53 compute-0 podman[256158]: 2026-01-27 08:55:53.171471518 +0000 UTC m=+3.999163673 container remove e2eba745c5c91c45a713086c0a963231b19f1ee9e07a68ec7c763c19ec17ef9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 08:55:53 compute-0 systemd[1]: libpod-conmon-e2eba745c5c91c45a713086c0a963231b19f1ee9e07a68ec7c763c19ec17ef9c.scope: Deactivated successfully.
Jan 27 08:55:53 compute-0 podman[256203]: 2026-01-27 08:55:53.354804189 +0000 UTC m=+0.069855593 container create 2b5b5424b47ce1d3ff2fc964b2f992efab7fbd68b15b1b8450015274fb55cd7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 27 08:55:53 compute-0 podman[256203]: 2026-01-27 08:55:53.307867741 +0000 UTC m=+0.022919165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:55:53 compute-0 systemd[1]: Started libpod-conmon-2b5b5424b47ce1d3ff2fc964b2f992efab7fbd68b15b1b8450015274fb55cd7d.scope.
Jan 27 08:55:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4dee978609750c62fc6a8fce9959c7d0f6d2cee2ee9ab6075448a0541dd366/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4dee978609750c62fc6a8fce9959c7d0f6d2cee2ee9ab6075448a0541dd366/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4dee978609750c62fc6a8fce9959c7d0f6d2cee2ee9ab6075448a0541dd366/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4dee978609750c62fc6a8fce9959c7d0f6d2cee2ee9ab6075448a0541dd366/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4dee978609750c62fc6a8fce9959c7d0f6d2cee2ee9ab6075448a0541dd366/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:53 compute-0 podman[256203]: 2026-01-27 08:55:53.538622783 +0000 UTC m=+0.253674207 container init 2b5b5424b47ce1d3ff2fc964b2f992efab7fbd68b15b1b8450015274fb55cd7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 08:55:53 compute-0 podman[256203]: 2026-01-27 08:55:53.545189781 +0000 UTC m=+0.260241185 container start 2b5b5424b47ce1d3ff2fc964b2f992efab7fbd68b15b1b8450015274fb55cd7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 27 08:55:53 compute-0 podman[256203]: 2026-01-27 08:55:53.559917842 +0000 UTC m=+0.274969266 container attach 2b5b5424b47ce1d3ff2fc964b2f992efab7fbd68b15b1b8450015274fb55cd7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:55:53 compute-0 ceph-mon[74357]: pgmap v957: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:53 compute-0 ceph-mon[74357]: pgmap v958: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:53 compute-0 ceph-mon[74357]: pgmap v959: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:55:54.238 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:55:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:55:54.240 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:55:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:55:54.240 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:55:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:54.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:54 compute-0 adoring_herschel[256219]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:55:54 compute-0 adoring_herschel[256219]: --> relative data size: 1.0
Jan 27 08:55:54 compute-0 adoring_herschel[256219]: --> All data devices are unavailable
Jan 27 08:55:54 compute-0 systemd[1]: libpod-2b5b5424b47ce1d3ff2fc964b2f992efab7fbd68b15b1b8450015274fb55cd7d.scope: Deactivated successfully.
Jan 27 08:55:54 compute-0 podman[256203]: 2026-01-27 08:55:54.355671715 +0000 UTC m=+1.070723109 container died 2b5b5424b47ce1d3ff2fc964b2f992efab7fbd68b15b1b8450015274fb55cd7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:55:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f4dee978609750c62fc6a8fce9959c7d0f6d2cee2ee9ab6075448a0541dd366-merged.mount: Deactivated successfully.
Jan 27 08:55:54 compute-0 podman[256203]: 2026-01-27 08:55:54.411361531 +0000 UTC m=+1.126412935 container remove 2b5b5424b47ce1d3ff2fc964b2f992efab7fbd68b15b1b8450015274fb55cd7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:55:54 compute-0 systemd[1]: libpod-conmon-2b5b5424b47ce1d3ff2fc964b2f992efab7fbd68b15b1b8450015274fb55cd7d.scope: Deactivated successfully.
Jan 27 08:55:54 compute-0 sudo[256096]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:54 compute-0 sudo[256247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:54 compute-0 sudo[256247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:54 compute-0 sudo[256247]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:54 compute-0 sudo[256272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:55:54 compute-0 sudo[256272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:54 compute-0 sudo[256272]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:54 compute-0 sudo[256297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:54 compute-0 sudo[256297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:54 compute-0 sudo[256297]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:54 compute-0 sudo[256322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:55:54 compute-0 sudo[256322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:54.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:54 compute-0 podman[256385]: 2026-01-27 08:55:54.960430809 +0000 UTC m=+0.037028249 container create 01e457fb0c5015d24a02ee5ba8885e844a4a0b6702dd4b03362b7b2187d8da01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lehmann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:55:54 compute-0 systemd[1]: Started libpod-conmon-01e457fb0c5015d24a02ee5ba8885e844a4a0b6702dd4b03362b7b2187d8da01.scope.
Jan 27 08:55:55 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:55:55 compute-0 podman[256385]: 2026-01-27 08:55:55.037362974 +0000 UTC m=+0.113960424 container init 01e457fb0c5015d24a02ee5ba8885e844a4a0b6702dd4b03362b7b2187d8da01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lehmann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:55:55 compute-0 podman[256385]: 2026-01-27 08:55:54.943722254 +0000 UTC m=+0.020319744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:55:55 compute-0 podman[256385]: 2026-01-27 08:55:55.043955923 +0000 UTC m=+0.120553353 container start 01e457fb0c5015d24a02ee5ba8885e844a4a0b6702dd4b03362b7b2187d8da01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:55:55 compute-0 relaxed_lehmann[256401]: 167 167
Jan 27 08:55:55 compute-0 systemd[1]: libpod-01e457fb0c5015d24a02ee5ba8885e844a4a0b6702dd4b03362b7b2187d8da01.scope: Deactivated successfully.
Jan 27 08:55:55 compute-0 podman[256385]: 2026-01-27 08:55:55.049381991 +0000 UTC m=+0.125979431 container attach 01e457fb0c5015d24a02ee5ba8885e844a4a0b6702dd4b03362b7b2187d8da01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lehmann, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:55:55 compute-0 podman[256385]: 2026-01-27 08:55:55.04973525 +0000 UTC m=+0.126332690 container died 01e457fb0c5015d24a02ee5ba8885e844a4a0b6702dd4b03362b7b2187d8da01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 27 08:55:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-be2c4076e375727f05071bb330fbfa8d0aee376d93ef89b7057fa3049cf4a8a1-merged.mount: Deactivated successfully.
Jan 27 08:55:55 compute-0 podman[256385]: 2026-01-27 08:55:55.093031409 +0000 UTC m=+0.169628839 container remove 01e457fb0c5015d24a02ee5ba8885e844a4a0b6702dd4b03362b7b2187d8da01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lehmann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:55:55 compute-0 systemd[1]: libpod-conmon-01e457fb0c5015d24a02ee5ba8885e844a4a0b6702dd4b03362b7b2187d8da01.scope: Deactivated successfully.
Jan 27 08:55:55 compute-0 ceph-mon[74357]: pgmap v960: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:55 compute-0 podman[256425]: 2026-01-27 08:55:55.260344314 +0000 UTC m=+0.038324294 container create 5521ceaa95ae5a69cce09a0281a95f7eb42c10a5c2129d28d43561beddb7a81f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:55:55 compute-0 systemd[1]: Started libpod-conmon-5521ceaa95ae5a69cce09a0281a95f7eb42c10a5c2129d28d43561beddb7a81f.scope.
Jan 27 08:55:55 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28aa1538189d7c1a01e7cc0c3a88a380f3efa367f7f66f821c0fecdb577ab7a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28aa1538189d7c1a01e7cc0c3a88a380f3efa367f7f66f821c0fecdb577ab7a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28aa1538189d7c1a01e7cc0c3a88a380f3efa367f7f66f821c0fecdb577ab7a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28aa1538189d7c1a01e7cc0c3a88a380f3efa367f7f66f821c0fecdb577ab7a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:55 compute-0 podman[256425]: 2026-01-27 08:55:55.32410968 +0000 UTC m=+0.102089660 container init 5521ceaa95ae5a69cce09a0281a95f7eb42c10a5c2129d28d43561beddb7a81f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:55:55 compute-0 podman[256425]: 2026-01-27 08:55:55.332529399 +0000 UTC m=+0.110509379 container start 5521ceaa95ae5a69cce09a0281a95f7eb42c10a5c2129d28d43561beddb7a81f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:55:55 compute-0 podman[256425]: 2026-01-27 08:55:55.335630723 +0000 UTC m=+0.113610703 container attach 5521ceaa95ae5a69cce09a0281a95f7eb42c10a5c2129d28d43561beddb7a81f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:55:55 compute-0 podman[256425]: 2026-01-27 08:55:55.245284444 +0000 UTC m=+0.023264454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:55:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]: {
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:     "0": [
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:         {
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             "devices": [
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "/dev/loop3"
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             ],
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             "lv_name": "ceph_lv0",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             "lv_size": "7511998464",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             "name": "ceph_lv0",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             "tags": {
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "ceph.cluster_name": "ceph",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "ceph.crush_device_class": "",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "ceph.encrypted": "0",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "ceph.osd_id": "0",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "ceph.type": "block",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:                 "ceph.vdo": "0"
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             },
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             "type": "block",
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:             "vg_name": "ceph_vg0"
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:         }
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]:     ]
Jan 27 08:55:56 compute-0 zealous_mclaren[256441]: }
Jan 27 08:55:56 compute-0 systemd[1]: libpod-5521ceaa95ae5a69cce09a0281a95f7eb42c10a5c2129d28d43561beddb7a81f.scope: Deactivated successfully.
Jan 27 08:55:56 compute-0 podman[256425]: 2026-01-27 08:55:56.089181437 +0000 UTC m=+0.867161417 container died 5521ceaa95ae5a69cce09a0281a95f7eb42c10a5c2129d28d43561beddb7a81f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:55:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-28aa1538189d7c1a01e7cc0c3a88a380f3efa367f7f66f821c0fecdb577ab7a5-merged.mount: Deactivated successfully.
Jan 27 08:55:56 compute-0 podman[256425]: 2026-01-27 08:55:56.139722553 +0000 UTC m=+0.917702553 container remove 5521ceaa95ae5a69cce09a0281a95f7eb42c10a5c2129d28d43561beddb7a81f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:55:56 compute-0 systemd[1]: libpod-conmon-5521ceaa95ae5a69cce09a0281a95f7eb42c10a5c2129d28d43561beddb7a81f.scope: Deactivated successfully.
Jan 27 08:55:56 compute-0 sudo[256322]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:56 compute-0 sudo[256465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:56 compute-0 sudo[256465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:56 compute-0 sudo[256465]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:56.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:56 compute-0 sudo[256490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:55:56 compute-0 sudo[256490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:56 compute-0 sudo[256490]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:56 compute-0 sudo[256515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:56 compute-0 sudo[256515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:56 compute-0 sudo[256515]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:56 compute-0 sudo[256540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:55:56 compute-0 sudo[256540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:56 compute-0 podman[256601]: 2026-01-27 08:55:56.735081432 +0000 UTC m=+0.073502253 container create 7062cedde6591fece870e042b5fbf92e11c20ce2f8d01c5304ba4e9c5375829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_montalcini, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 08:55:56 compute-0 podman[256601]: 2026-01-27 08:55:56.682243293 +0000 UTC m=+0.020664134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:55:56 compute-0 systemd[1]: Started libpod-conmon-7062cedde6591fece870e042b5fbf92e11c20ce2f8d01c5304ba4e9c5375829d.scope.
Jan 27 08:55:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:55:56 compute-0 podman[256601]: 2026-01-27 08:55:56.86100673 +0000 UTC m=+0.199427581 container init 7062cedde6591fece870e042b5fbf92e11c20ce2f8d01c5304ba4e9c5375829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:55:56 compute-0 podman[256601]: 2026-01-27 08:55:56.86765323 +0000 UTC m=+0.206074051 container start 7062cedde6591fece870e042b5fbf92e11c20ce2f8d01c5304ba4e9c5375829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:55:56 compute-0 kind_montalcini[256617]: 167 167
Jan 27 08:55:56 compute-0 systemd[1]: libpod-7062cedde6591fece870e042b5fbf92e11c20ce2f8d01c5304ba4e9c5375829d.scope: Deactivated successfully.
Jan 27 08:55:56 compute-0 podman[256601]: 2026-01-27 08:55:56.878609299 +0000 UTC m=+0.217030120 container attach 7062cedde6591fece870e042b5fbf92e11c20ce2f8d01c5304ba4e9c5375829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_montalcini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 27 08:55:56 compute-0 podman[256601]: 2026-01-27 08:55:56.879163564 +0000 UTC m=+0.217584385 container died 7062cedde6591fece870e042b5fbf92e11c20ce2f8d01c5304ba4e9c5375829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:55:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:56.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f720ce369be32796d6e6fdb6519192f64c8e8a203c3f8a9988bc8d5404e3fb0a-merged.mount: Deactivated successfully.
Jan 27 08:55:56 compute-0 podman[256601]: 2026-01-27 08:55:56.984856241 +0000 UTC m=+0.323277102 container remove 7062cedde6591fece870e042b5fbf92e11c20ce2f8d01c5304ba4e9c5375829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:55:57 compute-0 systemd[1]: libpod-conmon-7062cedde6591fece870e042b5fbf92e11c20ce2f8d01c5304ba4e9c5375829d.scope: Deactivated successfully.
Jan 27 08:55:57 compute-0 podman[256641]: 2026-01-27 08:55:57.172578572 +0000 UTC m=+0.041219424 container create 25760211003b60d32b5beace7fef83086cbbbc060de276e1a6591f26af821ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kowalevski, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:55:57 compute-0 systemd[1]: Started libpod-conmon-25760211003b60d32b5beace7fef83086cbbbc060de276e1a6591f26af821ea7.scope.
Jan 27 08:55:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766a089225aaebbc6b32183749b25b4f0cf0f6457ad0af2a2d4d86b81d4ec61c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766a089225aaebbc6b32183749b25b4f0cf0f6457ad0af2a2d4d86b81d4ec61c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766a089225aaebbc6b32183749b25b4f0cf0f6457ad0af2a2d4d86b81d4ec61c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766a089225aaebbc6b32183749b25b4f0cf0f6457ad0af2a2d4d86b81d4ec61c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:55:57 compute-0 podman[256641]: 2026-01-27 08:55:57.156163555 +0000 UTC m=+0.024804407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:55:57 compute-0 ceph-mon[74357]: pgmap v961: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:57 compute-0 podman[256641]: 2026-01-27 08:55:57.254546993 +0000 UTC m=+0.123187865 container init 25760211003b60d32b5beace7fef83086cbbbc060de276e1a6591f26af821ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kowalevski, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:55:57 compute-0 podman[256641]: 2026-01-27 08:55:57.26767084 +0000 UTC m=+0.136311732 container start 25760211003b60d32b5beace7fef83086cbbbc060de276e1a6591f26af821ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 27 08:55:57 compute-0 podman[256641]: 2026-01-27 08:55:57.272445801 +0000 UTC m=+0.141086683 container attach 25760211003b60d32b5beace7fef83086cbbbc060de276e1a6591f26af821ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kowalevski, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:55:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:55:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:58 compute-0 great_kowalevski[256657]: {
Jan 27 08:55:58 compute-0 great_kowalevski[256657]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:55:58 compute-0 great_kowalevski[256657]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:55:58 compute-0 great_kowalevski[256657]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:55:58 compute-0 great_kowalevski[256657]:         "osd_id": 0,
Jan 27 08:55:58 compute-0 great_kowalevski[256657]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:55:58 compute-0 great_kowalevski[256657]:         "type": "bluestore"
Jan 27 08:55:58 compute-0 great_kowalevski[256657]:     }
Jan 27 08:55:58 compute-0 great_kowalevski[256657]: }
Jan 27 08:55:58 compute-0 systemd[1]: libpod-25760211003b60d32b5beace7fef83086cbbbc060de276e1a6591f26af821ea7.scope: Deactivated successfully.
Jan 27 08:55:58 compute-0 podman[256641]: 2026-01-27 08:55:58.101176711 +0000 UTC m=+0.969817573 container died 25760211003b60d32b5beace7fef83086cbbbc060de276e1a6591f26af821ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kowalevski, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:55:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-766a089225aaebbc6b32183749b25b4f0cf0f6457ad0af2a2d4d86b81d4ec61c-merged.mount: Deactivated successfully.
Jan 27 08:55:58 compute-0 podman[256641]: 2026-01-27 08:55:58.155806489 +0000 UTC m=+1.024447351 container remove 25760211003b60d32b5beace7fef83086cbbbc060de276e1a6591f26af821ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:55:58 compute-0 systemd[1]: libpod-conmon-25760211003b60d32b5beace7fef83086cbbbc060de276e1a6591f26af821ea7.scope: Deactivated successfully.
Jan 27 08:55:58 compute-0 sudo[256540]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:58 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:55:58 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:55:58 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:55:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:55:58.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:58 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:55:58 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 8d676859-13cb-403f-9a59-e96fede1157a does not exist
Jan 27 08:55:58 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev dc6e20bd-80fb-4b25-8a84-f820f89d2f22 does not exist
Jan 27 08:55:58 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev a8bc4a62-fdac-467b-b831-feb57658a1c6 does not exist
Jan 27 08:55:58 compute-0 sudo[256691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:55:58 compute-0 sudo[256691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:58 compute-0 sudo[256691]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:58 compute-0 sudo[256716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:55:58 compute-0 sudo[256716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:55:58 compute-0 sudo[256716]: pam_unix(sudo:session): session closed for user root
Jan 27 08:55:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:55:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:55:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:55:58.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:55:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 08:55:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3454354916' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:55:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 08:55:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3454354916' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:55:59 compute-0 nova_compute[247671]: 2026-01-27 08:55:59.247 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:55:59 compute-0 nova_compute[247671]: 2026-01-27 08:55:59.248 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:55:59 compute-0 nova_compute[247671]: 2026-01-27 08:55:59.249 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:55:59 compute-0 nova_compute[247671]: 2026-01-27 08:55:59.249 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 08:55:59 compute-0 ceph-mon[74357]: pgmap v962: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:55:59 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:55:59 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:55:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3454354916' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:55:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3454354916' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:55:59 compute-0 nova_compute[247671]: 2026-01-27 08:55:59.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:56:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:00.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:00 compute-0 nova_compute[247671]: 2026-01-27 08:56:00.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:56:00 compute-0 nova_compute[247671]: 2026-01-27 08:56:00.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:56:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:56:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:00.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:56:01 compute-0 ceph-mon[74357]: pgmap v963: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:01 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3800135837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:56:01 compute-0 nova_compute[247671]: 2026-01-27 08:56:01.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:56:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:02.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:02 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4146853003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:56:02 compute-0 nova_compute[247671]: 2026-01-27 08:56:02.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:56:02 compute-0 nova_compute[247671]: 2026-01-27 08:56:02.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 08:56:02 compute-0 nova_compute[247671]: 2026-01-27 08:56:02.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 08:56:02 compute-0 nova_compute[247671]: 2026-01-27 08:56:02.445 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:56:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:56:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:02.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:03 compute-0 podman[256743]: 2026-01-27 08:56:03.272643406 +0000 UTC m=+0.083135344 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 27 08:56:03 compute-0 ceph-mon[74357]: pgmap v964: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:03 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/602545388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:56:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:04.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:04 compute-0 nova_compute[247671]: 2026-01-27 08:56:04.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:56:04 compute-0 sudo[256771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:56:04 compute-0 sudo[256771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:56:04 compute-0 sudo[256771]: pam_unix(sudo:session): session closed for user root
Jan 27 08:56:04 compute-0 nova_compute[247671]: 2026-01-27 08:56:04.452 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:56:04 compute-0 nova_compute[247671]: 2026-01-27 08:56:04.452 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:56:04 compute-0 nova_compute[247671]: 2026-01-27 08:56:04.452 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:56:04 compute-0 nova_compute[247671]: 2026-01-27 08:56:04.452 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:56:04 compute-0 nova_compute[247671]: 2026-01-27 08:56:04.453 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:56:04 compute-0 sudo[256796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:56:04 compute-0 sudo[256796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:56:04 compute-0 sudo[256796]: pam_unix(sudo:session): session closed for user root
Jan 27 08:56:04 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1796790539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:56:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:56:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:04.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:56:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:56:05 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1537042762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.037 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.175 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.176 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5167MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.176 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.177 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.229 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.230 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.249 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:56:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:56:05 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344567832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.725 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.732 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:56:05 compute-0 ceph-mon[74357]: pgmap v965: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:05 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1537042762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:56:05 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1344567832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.760 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.761 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:56:05 compute-0 nova_compute[247671]: 2026-01-27 08:56:05.762 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:56:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:56:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:06.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:56:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:56:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:06.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:56:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:56:07 compute-0 ceph-mon[74357]: pgmap v966: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:08.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:08.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:09 compute-0 ceph-mon[74357]: pgmap v967: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:10.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:56:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:10.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:56:11 compute-0 ceph-mon[74357]: pgmap v968: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:12.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:56:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:12.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:13 compute-0 ceph-mon[74357]: pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:14.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:14.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:56:15
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', '.mgr']
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:56:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:56:15 compute-0 ceph-mon[74357]: pgmap v970: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 170 B/s wr, 1 op/s
Jan 27 08:56:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:16.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:16.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:17 compute-0 ceph-osd[84951]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 27 08:56:17 compute-0 podman[256871]: 2026-01-27 08:56:17.271877402 +0000 UTC m=+0.086682691 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 08:56:17 compute-0 ceph-mon[74357]: pgmap v971: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 170 B/s wr, 1 op/s
Jan 27 08:56:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:56:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 511 B/s wr, 4 op/s
Jan 27 08:56:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:18.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:56:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:18.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:56:19 compute-0 ceph-mon[74357]: pgmap v972: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 511 B/s wr, 4 op/s
Jan 27 08:56:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 511 B/s wr, 4 op/s
Jan 27 08:56:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:20.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:56:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:20.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:56:21 compute-0 ceph-mon[74357]: pgmap v973: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 511 B/s wr, 4 op/s
Jan 27 08:56:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 27 08:56:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:22.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:56:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:22.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:23 compute-0 ceph-mon[74357]: pgmap v974: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 27 08:56:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:24.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:56:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:56:24 compute-0 sudo[256896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:56:24 compute-0 sudo[256896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:56:24 compute-0 sudo[256896]: pam_unix(sudo:session): session closed for user root
Jan 27 08:56:24 compute-0 sudo[256921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:56:24 compute-0 sudo[256921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:56:24 compute-0 sudo[256921]: pam_unix(sudo:session): session closed for user root
Jan 27 08:56:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:24.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:25 compute-0 ceph-mon[74357]: pgmap v975: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 27 08:56:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 27 08:56:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:26.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:56:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:26.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:56:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:56:27 compute-0 ceph-mon[74357]: pgmap v976: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 27 08:56:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.9 KiB/s rd, 852 B/s wr, 7 op/s
Jan 27 08:56:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:28.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:28.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:30 compute-0 ceph-mon[74357]: pgmap v977: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.9 KiB/s rd, 852 B/s wr, 7 op/s
Jan 27 08:56:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 511 B/s wr, 4 op/s
Jan 27 08:56:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:30.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:30.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:31 compute-0 ceph-mon[74357]: pgmap v978: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 511 B/s wr, 4 op/s
Jan 27 08:56:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 511 B/s wr, 4 op/s
Jan 27 08:56:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:32.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:56:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:32.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:33 compute-0 ceph-mon[74357]: pgmap v979: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 511 B/s wr, 4 op/s
Jan 27 08:56:33 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3598429122' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:56:33 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3598429122' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:56:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:34 compute-0 podman[256951]: 2026-01-27 08:56:34.260743307 +0000 UTC m=+0.072070703 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 08:56:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:34.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:34.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:35 compute-0 ceph-mon[74357]: pgmap v980: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 9 op/s
Jan 27 08:56:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:56:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:36.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:56:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:56:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:36.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:56:37 compute-0 ceph-mon[74357]: pgmap v981: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 9 op/s
Jan 27 08:56:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3495604466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:56:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3495604466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:56:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:56:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 20 op/s
Jan 27 08:56:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:56:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:38.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:56:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:56:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:38.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:56:39 compute-0 ceph-mon[74357]: pgmap v982: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 20 op/s
Jan 27 08:56:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 20 op/s
Jan 27 08:56:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:40.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:40.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:41 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:56:41.669 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:56:41 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:56:41.671 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 08:56:41 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:56:41.672 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:56:42 compute-0 ceph-mon[74357]: pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 20 op/s
Jan 27 08:56:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 21 op/s
Jan 27 08:56:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:42.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:56:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:56:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:42.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:56:43 compute-0 ceph-mon[74357]: pgmap v984: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 21 op/s
Jan 27 08:56:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 21 op/s
Jan 27 08:56:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:44.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:44 compute-0 sudo[256982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:56:44 compute-0 sudo[256982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:56:44 compute-0 sudo[256982]: pam_unix(sudo:session): session closed for user root
Jan 27 08:56:44 compute-0 sudo[257007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:56:44 compute-0 sudo[257007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:56:44 compute-0 sudo[257007]: pam_unix(sudo:session): session closed for user root
Jan 27 08:56:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:44.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:56:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:56:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:56:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:56:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:56:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:56:45 compute-0 ceph-mon[74357]: pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 21 op/s
Jan 27 08:56:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 21 op/s
Jan 27 08:56:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:46.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:46.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:47 compute-0 ceph-mon[74357]: pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 21 op/s
Jan 27 08:56:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:56:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 9.4 KiB/s rd, 511 B/s wr, 11 op/s
Jan 27 08:56:48 compute-0 podman[257034]: 2026-01-27 08:56:48.242779105 +0000 UTC m=+0.061081204 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:56:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:48.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:48.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:49 compute-0 ceph-mon[74357]: pgmap v987: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 9.4 KiB/s rd, 511 B/s wr, 11 op/s
Jan 27 08:56:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 27 08:56:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:50.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:50.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:51 compute-0 ceph-mon[74357]: pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 27 08:56:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 27 08:56:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:52.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:56:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:52.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:53 compute-0 ceph-mon[74357]: pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 27 08:56:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:56:54.239 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:56:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:56:54.240 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:56:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:56:54.240 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:56:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:54.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:54.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:55 compute-0 ceph-mon[74357]: pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:56.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:56.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:57 compute-0 ceph-mon[74357]: pgmap v991: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:56:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:56:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:56:58.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:56:58 compute-0 sudo[257057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:56:58 compute-0 sudo[257057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:56:58 compute-0 sudo[257057]: pam_unix(sudo:session): session closed for user root
Jan 27 08:56:58 compute-0 sudo[257082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:56:58 compute-0 sudo[257082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:56:58 compute-0 sudo[257082]: pam_unix(sudo:session): session closed for user root
Jan 27 08:56:58 compute-0 sudo[257107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:56:58 compute-0 sudo[257107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:56:58 compute-0 sudo[257107]: pam_unix(sudo:session): session closed for user root
Jan 27 08:56:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:56:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:56:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:56:58.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:56:59 compute-0 sudo[257132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:56:59 compute-0 sudo[257132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:56:59 compute-0 sudo[257132]: pam_unix(sudo:session): session closed for user root
Jan 27 08:56:59 compute-0 ceph-mon[74357]: pgmap v992: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:56:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3860663216' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:56:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3860663216' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:57:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 08:57:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:57:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 08:57:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:57:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:00.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:57:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:57:00 compute-0 nova_compute[247671]: 2026-01-27 08:57:00.763 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:57:00 compute-0 nova_compute[247671]: 2026-01-27 08:57:00.763 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:57:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:00.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:01 compute-0 nova_compute[247671]: 2026-01-27 08:57:01.134 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:57:01 compute-0 nova_compute[247671]: 2026-01-27 08:57:01.134 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:57:01 compute-0 nova_compute[247671]: 2026-01-27 08:57:01.135 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:57:01 compute-0 nova_compute[247671]: 2026-01-27 08:57:01.135 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 08:57:01 compute-0 nova_compute[247671]: 2026-01-27 08:57:01.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:57:01 compute-0 ceph-mon[74357]: pgmap v993: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:57:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:57:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:57:01 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:57:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:57:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:57:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:57:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:57:01 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev a46c7936-ef15-434f-85fa-9e3218391998 does not exist
Jan 27 08:57:01 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev ebfb1c57-18bf-4163-9ee7-807ce909f572 does not exist
Jan 27 08:57:01 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5e2748e7-64f3-47d9-a277-37e9348f93ff does not exist
Jan 27 08:57:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:57:01 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:57:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:57:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:57:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:57:01 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:57:01 compute-0 sudo[257191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:01 compute-0 sudo[257191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:01 compute-0 sudo[257191]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:01 compute-0 sudo[257216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:57:01 compute-0 sudo[257216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:01 compute-0 sudo[257216]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:01 compute-0 sudo[257241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:01 compute-0 sudo[257241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:01 compute-0 sudo[257241]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:01 compute-0 sudo[257266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:57:01 compute-0 sudo[257266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:02 compute-0 podman[257331]: 2026-01-27 08:57:02.422732731 +0000 UTC m=+0.072054213 container create 11d56b0b7874dee1ef714e72a1eb7e0af0b8b4e2763c9a3b89f9ead2cea25639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_liskov, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:57:02 compute-0 nova_compute[247671]: 2026-01-27 08:57:02.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:57:02 compute-0 nova_compute[247671]: 2026-01-27 08:57:02.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:57:02 compute-0 systemd[1]: Started libpod-conmon-11d56b0b7874dee1ef714e72a1eb7e0af0b8b4e2763c9a3b89f9ead2cea25639.scope.
Jan 27 08:57:02 compute-0 podman[257331]: 2026-01-27 08:57:02.397326149 +0000 UTC m=+0.046647671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:57:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:57:02 compute-0 podman[257331]: 2026-01-27 08:57:02.524076739 +0000 UTC m=+0.173398231 container init 11d56b0b7874dee1ef714e72a1eb7e0af0b8b4e2763c9a3b89f9ead2cea25639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:57:02 compute-0 podman[257331]: 2026-01-27 08:57:02.532425467 +0000 UTC m=+0.181746969 container start 11d56b0b7874dee1ef714e72a1eb7e0af0b8b4e2763c9a3b89f9ead2cea25639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_liskov, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:57:02 compute-0 podman[257331]: 2026-01-27 08:57:02.536239 +0000 UTC m=+0.185560492 container attach 11d56b0b7874dee1ef714e72a1eb7e0af0b8b4e2763c9a3b89f9ead2cea25639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_liskov, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 08:57:02 compute-0 mystifying_liskov[257347]: 167 167
Jan 27 08:57:02 compute-0 systemd[1]: libpod-11d56b0b7874dee1ef714e72a1eb7e0af0b8b4e2763c9a3b89f9ead2cea25639.scope: Deactivated successfully.
Jan 27 08:57:02 compute-0 conmon[257347]: conmon 11d56b0b7874dee1ef71 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11d56b0b7874dee1ef714e72a1eb7e0af0b8b4e2763c9a3b89f9ead2cea25639.scope/container/memory.events
Jan 27 08:57:02 compute-0 podman[257331]: 2026-01-27 08:57:02.542905052 +0000 UTC m=+0.192226564 container died 11d56b0b7874dee1ef714e72a1eb7e0af0b8b4e2763c9a3b89f9ead2cea25639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 27 08:57:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-67d46c1520e529f6a06a311e2063940092c1198301a485574a23338f9e55000b-merged.mount: Deactivated successfully.
Jan 27 08:57:02 compute-0 podman[257331]: 2026-01-27 08:57:02.585926853 +0000 UTC m=+0.235248325 container remove 11d56b0b7874dee1ef714e72a1eb7e0af0b8b4e2763c9a3b89f9ead2cea25639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:57:02 compute-0 systemd[1]: libpod-conmon-11d56b0b7874dee1ef714e72a1eb7e0af0b8b4e2763c9a3b89f9ead2cea25639.scope: Deactivated successfully.
Jan 27 08:57:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:57:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:57:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:57:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:57:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:57:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:57:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:02.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:02 compute-0 podman[257372]: 2026-01-27 08:57:02.760743933 +0000 UTC m=+0.053094167 container create 87744f9ad3905e28c54b165abb2ff61f18f36e456bc7db226d52bcd7e2cbaf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jackson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 27 08:57:02 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:02 compute-0 systemd[1]: Started libpod-conmon-87744f9ad3905e28c54b165abb2ff61f18f36e456bc7db226d52bcd7e2cbaf50.scope.
Jan 27 08:57:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b2f54048bffe91eedae0e40ec9998eeda51a664178da239ee62ac47e5d6d05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:02 compute-0 podman[257372]: 2026-01-27 08:57:02.735393973 +0000 UTC m=+0.027744307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b2f54048bffe91eedae0e40ec9998eeda51a664178da239ee62ac47e5d6d05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b2f54048bffe91eedae0e40ec9998eeda51a664178da239ee62ac47e5d6d05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b2f54048bffe91eedae0e40ec9998eeda51a664178da239ee62ac47e5d6d05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b2f54048bffe91eedae0e40ec9998eeda51a664178da239ee62ac47e5d6d05/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:02 compute-0 podman[257372]: 2026-01-27 08:57:02.847872485 +0000 UTC m=+0.140222739 container init 87744f9ad3905e28c54b165abb2ff61f18f36e456bc7db226d52bcd7e2cbaf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:57:02 compute-0 podman[257372]: 2026-01-27 08:57:02.855240804 +0000 UTC m=+0.147591038 container start 87744f9ad3905e28c54b165abb2ff61f18f36e456bc7db226d52bcd7e2cbaf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jackson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:57:02 compute-0 podman[257372]: 2026-01-27 08:57:02.85837444 +0000 UTC m=+0.150724684 container attach 87744f9ad3905e28c54b165abb2ff61f18f36e456bc7db226d52bcd7e2cbaf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jackson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:57:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:57:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:02.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:57:03 compute-0 tender_jackson[257389]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:57:03 compute-0 tender_jackson[257389]: --> relative data size: 1.0
Jan 27 08:57:03 compute-0 tender_jackson[257389]: --> All data devices are unavailable
Jan 27 08:57:03 compute-0 ceph-mon[74357]: pgmap v994: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:03 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2412578049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:57:03 compute-0 systemd[1]: libpod-87744f9ad3905e28c54b165abb2ff61f18f36e456bc7db226d52bcd7e2cbaf50.scope: Deactivated successfully.
Jan 27 08:57:03 compute-0 podman[257372]: 2026-01-27 08:57:03.704094633 +0000 UTC m=+0.996444877 container died 87744f9ad3905e28c54b165abb2ff61f18f36e456bc7db226d52bcd7e2cbaf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jackson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-94b2f54048bffe91eedae0e40ec9998eeda51a664178da239ee62ac47e5d6d05-merged.mount: Deactivated successfully.
Jan 27 08:57:03 compute-0 podman[257372]: 2026-01-27 08:57:03.776177256 +0000 UTC m=+1.068527510 container remove 87744f9ad3905e28c54b165abb2ff61f18f36e456bc7db226d52bcd7e2cbaf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:57:03 compute-0 systemd[1]: libpod-conmon-87744f9ad3905e28c54b165abb2ff61f18f36e456bc7db226d52bcd7e2cbaf50.scope: Deactivated successfully.
Jan 27 08:57:03 compute-0 sudo[257266]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:03 compute-0 sudo[257419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:03 compute-0 sudo[257419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:03 compute-0 sudo[257419]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:03 compute-0 sudo[257444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:57:03 compute-0 sudo[257444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:03 compute-0 sudo[257444]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:04 compute-0 sudo[257469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:04 compute-0 sudo[257469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:04 compute-0 sudo[257469]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:04 compute-0 sudo[257494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:57:04 compute-0 sudo[257494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:04 compute-0 nova_compute[247671]: 2026-01-27 08:57:04.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:57:04 compute-0 nova_compute[247671]: 2026-01-27 08:57:04.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 08:57:04 compute-0 nova_compute[247671]: 2026-01-27 08:57:04.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 08:57:04 compute-0 nova_compute[247671]: 2026-01-27 08:57:04.452 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:57:04 compute-0 nova_compute[247671]: 2026-01-27 08:57:04.452 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:57:04 compute-0 nova_compute[247671]: 2026-01-27 08:57:04.473 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:57:04 compute-0 nova_compute[247671]: 2026-01-27 08:57:04.473 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:57:04 compute-0 nova_compute[247671]: 2026-01-27 08:57:04.473 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:57:04 compute-0 nova_compute[247671]: 2026-01-27 08:57:04.474 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:57:04 compute-0 nova_compute[247671]: 2026-01-27 08:57:04.474 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:57:04 compute-0 podman[257559]: 2026-01-27 08:57:04.476418219 +0000 UTC m=+0.044034800 container create e854a6acf13e37f5a79459d80e1a86692982b9c3b4e86b56b4962f824a17feb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:57:04 compute-0 systemd[1]: Started libpod-conmon-e854a6acf13e37f5a79459d80e1a86692982b9c3b4e86b56b4962f824a17feb9.scope.
Jan 27 08:57:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:57:04 compute-0 podman[257559]: 2026-01-27 08:57:04.540413351 +0000 UTC m=+0.108029952 container init e854a6acf13e37f5a79459d80e1a86692982b9c3b4e86b56b4962f824a17feb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:57:04 compute-0 podman[257559]: 2026-01-27 08:57:04.548192173 +0000 UTC m=+0.115808774 container start e854a6acf13e37f5a79459d80e1a86692982b9c3b4e86b56b4962f824a17feb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ptolemy, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:57:04 compute-0 podman[257559]: 2026-01-27 08:57:04.456175158 +0000 UTC m=+0.023791759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:57:04 compute-0 podman[257559]: 2026-01-27 08:57:04.552306545 +0000 UTC m=+0.119923136 container attach e854a6acf13e37f5a79459d80e1a86692982b9c3b4e86b56b4962f824a17feb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ptolemy, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:57:04 compute-0 systemd[1]: libpod-e854a6acf13e37f5a79459d80e1a86692982b9c3b4e86b56b4962f824a17feb9.scope: Deactivated successfully.
Jan 27 08:57:04 compute-0 festive_ptolemy[257577]: 167 167
Jan 27 08:57:04 compute-0 conmon[257577]: conmon e854a6acf13e37f5a794 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e854a6acf13e37f5a79459d80e1a86692982b9c3b4e86b56b4962f824a17feb9.scope/container/memory.events
Jan 27 08:57:04 compute-0 podman[257559]: 2026-01-27 08:57:04.554471254 +0000 UTC m=+0.122087835 container died e854a6acf13e37f5a79459d80e1a86692982b9c3b4e86b56b4962f824a17feb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ptolemy, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-110bf750ca3f0a57c2afa5221db861da66037e14dc5c262a7c2b01108148abed-merged.mount: Deactivated successfully.
Jan 27 08:57:04 compute-0 podman[257559]: 2026-01-27 08:57:04.598870133 +0000 UTC m=+0.166486734 container remove e854a6acf13e37f5a79459d80e1a86692982b9c3b4e86b56b4962f824a17feb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ptolemy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 27 08:57:04 compute-0 systemd[1]: libpod-conmon-e854a6acf13e37f5a79459d80e1a86692982b9c3b4e86b56b4962f824a17feb9.scope: Deactivated successfully.
Jan 27 08:57:04 compute-0 podman[257574]: 2026-01-27 08:57:04.619636038 +0000 UTC m=+0.104620189 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:57:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:57:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:04.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:57:04 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/977743122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:57:04 compute-0 podman[257645]: 2026-01-27 08:57:04.784609039 +0000 UTC m=+0.057621599 container create e18c0393557ea5147262f5657eb3b2f7ee974a8cd3d4b09969586b9b1cb11e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sanderson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 27 08:57:04 compute-0 podman[257645]: 2026-01-27 08:57:04.756852423 +0000 UTC m=+0.029864973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:57:04 compute-0 systemd[1]: Started libpod-conmon-e18c0393557ea5147262f5657eb3b2f7ee974a8cd3d4b09969586b9b1cb11e03.scope.
Jan 27 08:57:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:57:04 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3835490452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:57:04 compute-0 sudo[257659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:04 compute-0 nova_compute[247671]: 2026-01-27 08:57:04.896 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:57:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:57:04 compute-0 sudo[257659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7888827d242bdf962f04b070925ebb6e646581f1abfee38dc55cf117517ae4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:04 compute-0 sudo[257659]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7888827d242bdf962f04b070925ebb6e646581f1abfee38dc55cf117517ae4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7888827d242bdf962f04b070925ebb6e646581f1abfee38dc55cf117517ae4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7888827d242bdf962f04b070925ebb6e646581f1abfee38dc55cf117517ae4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:04 compute-0 podman[257645]: 2026-01-27 08:57:04.946466545 +0000 UTC m=+0.219479155 container init e18c0393557ea5147262f5657eb3b2f7ee974a8cd3d4b09969586b9b1cb11e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 08:57:04 compute-0 podman[257645]: 2026-01-27 08:57:04.956553039 +0000 UTC m=+0.229565579 container start e18c0393557ea5147262f5657eb3b2f7ee974a8cd3d4b09969586b9b1cb11e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sanderson, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 08:57:04 compute-0 podman[257645]: 2026-01-27 08:57:04.964232809 +0000 UTC m=+0.237245389 container attach e18c0393557ea5147262f5657eb3b2f7ee974a8cd3d4b09969586b9b1cb11e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 08:57:04 compute-0 sudo[257691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:04 compute-0 sudo[257691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:04.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:04 compute-0 sudo[257691]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:05 compute-0 nova_compute[247671]: 2026-01-27 08:57:05.061 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:57:05 compute-0 nova_compute[247671]: 2026-01-27 08:57:05.063 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:57:05 compute-0 nova_compute[247671]: 2026-01-27 08:57:05.063 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:57:05 compute-0 nova_compute[247671]: 2026-01-27 08:57:05.064 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:57:05 compute-0 nova_compute[247671]: 2026-01-27 08:57:05.148 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:57:05 compute-0 nova_compute[247671]: 2026-01-27 08:57:05.148 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:57:05 compute-0 nova_compute[247671]: 2026-01-27 08:57:05.164 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:57:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:57:05 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2370212229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:57:05 compute-0 nova_compute[247671]: 2026-01-27 08:57:05.654 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]: {
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:     "0": [
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:         {
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             "devices": [
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "/dev/loop3"
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             ],
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             "lv_name": "ceph_lv0",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             "lv_size": "7511998464",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             "name": "ceph_lv0",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             "tags": {
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "ceph.cluster_name": "ceph",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "ceph.crush_device_class": "",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "ceph.encrypted": "0",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "ceph.osd_id": "0",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "ceph.type": "block",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:                 "ceph.vdo": "0"
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             },
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             "type": "block",
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:             "vg_name": "ceph_vg0"
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:         }
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]:     ]
Jan 27 08:57:05 compute-0 heuristic_sanderson[257684]: }
Jan 27 08:57:05 compute-0 nova_compute[247671]: 2026-01-27 08:57:05.660 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:57:05 compute-0 systemd[1]: libpod-e18c0393557ea5147262f5657eb3b2f7ee974a8cd3d4b09969586b9b1cb11e03.scope: Deactivated successfully.
Jan 27 08:57:05 compute-0 podman[257645]: 2026-01-27 08:57:05.679267874 +0000 UTC m=+0.952280414 container died e18c0393557ea5147262f5657eb3b2f7ee974a8cd3d4b09969586b9b1cb11e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sanderson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:57:05 compute-0 nova_compute[247671]: 2026-01-27 08:57:05.687 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:57:05 compute-0 nova_compute[247671]: 2026-01-27 08:57:05.689 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:57:05 compute-0 nova_compute[247671]: 2026-01-27 08:57:05.689 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:57:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f7888827d242bdf962f04b070925ebb6e646581f1abfee38dc55cf117517ae4-merged.mount: Deactivated successfully.
Jan 27 08:57:05 compute-0 podman[257645]: 2026-01-27 08:57:05.735479685 +0000 UTC m=+1.008492225 container remove e18c0393557ea5147262f5657eb3b2f7ee974a8cd3d4b09969586b9b1cb11e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sanderson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:57:05 compute-0 systemd[1]: libpod-conmon-e18c0393557ea5147262f5657eb3b2f7ee974a8cd3d4b09969586b9b1cb11e03.scope: Deactivated successfully.
Jan 27 08:57:05 compute-0 sudo[257494]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:05 compute-0 ceph-mon[74357]: pgmap v995: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:05 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3835490452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:57:05 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/416545954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:57:05 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2370212229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:57:05 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1117517094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:57:05 compute-0 sudo[257757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:05 compute-0 sudo[257757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:05 compute-0 sudo[257757]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:05 compute-0 sudo[257782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:57:05 compute-0 sudo[257782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:05 compute-0 sudo[257782]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:05 compute-0 sudo[257807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:05 compute-0 sudo[257807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:05 compute-0 sudo[257807]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:06 compute-0 sudo[257832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:57:06 compute-0 sudo[257832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:06 compute-0 podman[257897]: 2026-01-27 08:57:06.302317776 +0000 UTC m=+0.038546780 container create 128fbb93c95cf7d5b0f224e4c0e397783c463e665f9da340b8324d1819056948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gauss, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:57:06 compute-0 systemd[1]: Started libpod-conmon-128fbb93c95cf7d5b0f224e4c0e397783c463e665f9da340b8324d1819056948.scope.
Jan 27 08:57:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:57:06 compute-0 podman[257897]: 2026-01-27 08:57:06.361110247 +0000 UTC m=+0.097339271 container init 128fbb93c95cf7d5b0f224e4c0e397783c463e665f9da340b8324d1819056948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:57:06 compute-0 podman[257897]: 2026-01-27 08:57:06.366618157 +0000 UTC m=+0.102847161 container start 128fbb93c95cf7d5b0f224e4c0e397783c463e665f9da340b8324d1819056948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gauss, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 08:57:06 compute-0 podman[257897]: 2026-01-27 08:57:06.369543186 +0000 UTC m=+0.105772220 container attach 128fbb93c95cf7d5b0f224e4c0e397783c463e665f9da340b8324d1819056948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gauss, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 27 08:57:06 compute-0 boring_gauss[257913]: 167 167
Jan 27 08:57:06 compute-0 systemd[1]: libpod-128fbb93c95cf7d5b0f224e4c0e397783c463e665f9da340b8324d1819056948.scope: Deactivated successfully.
Jan 27 08:57:06 compute-0 podman[257897]: 2026-01-27 08:57:06.371466629 +0000 UTC m=+0.107695643 container died 128fbb93c95cf7d5b0f224e4c0e397783c463e665f9da340b8324d1819056948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gauss, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 27 08:57:06 compute-0 podman[257897]: 2026-01-27 08:57:06.287240045 +0000 UTC m=+0.023469069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-685b487a7b78ac121a4ab9e2f25b18fc9b9dfa1007ce9c09f1fb800803c80dc9-merged.mount: Deactivated successfully.
Jan 27 08:57:06 compute-0 podman[257897]: 2026-01-27 08:57:06.403241844 +0000 UTC m=+0.139470848 container remove 128fbb93c95cf7d5b0f224e4c0e397783c463e665f9da340b8324d1819056948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gauss, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:57:06 compute-0 systemd[1]: libpod-conmon-128fbb93c95cf7d5b0f224e4c0e397783c463e665f9da340b8324d1819056948.scope: Deactivated successfully.
Jan 27 08:57:06 compute-0 podman[257936]: 2026-01-27 08:57:06.566369634 +0000 UTC m=+0.055945284 container create 0cc310b7e88bbeaf3d0e3dc96b2e7ba2b3848cb7d036baa23a61e950753d2de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:57:06 compute-0 podman[257936]: 2026-01-27 08:57:06.533700575 +0000 UTC m=+0.023276315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:57:06 compute-0 systemd[1]: Started libpod-conmon-0cc310b7e88bbeaf3d0e3dc96b2e7ba2b3848cb7d036baa23a61e950753d2de0.scope.
Jan 27 08:57:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:06.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c493f800ff0c1e376857e179da771d355f11c91fac2d7e07db88138492073c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c493f800ff0c1e376857e179da771d355f11c91fac2d7e07db88138492073c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c493f800ff0c1e376857e179da771d355f11c91fac2d7e07db88138492073c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c493f800ff0c1e376857e179da771d355f11c91fac2d7e07db88138492073c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:06 compute-0 podman[257936]: 2026-01-27 08:57:06.691930313 +0000 UTC m=+0.181505983 container init 0cc310b7e88bbeaf3d0e3dc96b2e7ba2b3848cb7d036baa23a61e950753d2de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:57:06 compute-0 podman[257936]: 2026-01-27 08:57:06.704102864 +0000 UTC m=+0.193678514 container start 0cc310b7e88bbeaf3d0e3dc96b2e7ba2b3848cb7d036baa23a61e950753d2de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:57:06 compute-0 podman[257936]: 2026-01-27 08:57:06.708200976 +0000 UTC m=+0.197776716 container attach 0cc310b7e88bbeaf3d0e3dc96b2e7ba2b3848cb7d036baa23a61e950753d2de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:57:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 27 08:57:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 27 08:57:06 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 27 08:57:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:06.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:07 compute-0 ecstatic_diffie[257952]: {
Jan 27 08:57:07 compute-0 ecstatic_diffie[257952]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:57:07 compute-0 ecstatic_diffie[257952]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:57:07 compute-0 ecstatic_diffie[257952]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:57:07 compute-0 ecstatic_diffie[257952]:         "osd_id": 0,
Jan 27 08:57:07 compute-0 ecstatic_diffie[257952]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:57:07 compute-0 ecstatic_diffie[257952]:         "type": "bluestore"
Jan 27 08:57:07 compute-0 ecstatic_diffie[257952]:     }
Jan 27 08:57:07 compute-0 ecstatic_diffie[257952]: }
Jan 27 08:57:07 compute-0 systemd[1]: libpod-0cc310b7e88bbeaf3d0e3dc96b2e7ba2b3848cb7d036baa23a61e950753d2de0.scope: Deactivated successfully.
Jan 27 08:57:07 compute-0 podman[257936]: 2026-01-27 08:57:07.581374276 +0000 UTC m=+1.070949926 container died 0cc310b7e88bbeaf3d0e3dc96b2e7ba2b3848cb7d036baa23a61e950753d2de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:57:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c493f800ff0c1e376857e179da771d355f11c91fac2d7e07db88138492073c0-merged.mount: Deactivated successfully.
Jan 27 08:57:07 compute-0 podman[257936]: 2026-01-27 08:57:07.634837432 +0000 UTC m=+1.124413082 container remove 0cc310b7e88bbeaf3d0e3dc96b2e7ba2b3848cb7d036baa23a61e950753d2de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:57:07 compute-0 systemd[1]: libpod-conmon-0cc310b7e88bbeaf3d0e3dc96b2e7ba2b3848cb7d036baa23a61e950753d2de0.scope: Deactivated successfully.
Jan 27 08:57:07 compute-0 sudo[257832]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:57:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:57:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:57:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:57:07 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 428fe563-863b-480a-a322-68c3c23df16e does not exist
Jan 27 08:57:07 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 899a8abd-8a5f-4b23-a9c7-5d240f061e26 does not exist
Jan 27 08:57:07 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 65ae83a7-bf9e-486b-a8bd-95e82da1ef12 does not exist
Jan 27 08:57:07 compute-0 sudo[257986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:07 compute-0 sudo[257986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:07 compute-0 sudo[257986]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:07 compute-0 ceph-mon[74357]: pgmap v996: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:07 compute-0 ceph-mon[74357]: osdmap e140: 3 total, 3 up, 3 in
Jan 27 08:57:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:57:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:57:07 compute-0 sudo[258011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:57:07 compute-0 sudo[258011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:07 compute-0 sudo[258011]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 921 B/s wr, 2 op/s
Jan 27 08:57:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:57:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:08.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:57:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:08.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:09 compute-0 ceph-mon[74357]: pgmap v998: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 921 B/s wr, 2 op/s
Jan 27 08:57:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 27 08:57:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:10.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 27 08:57:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 27 08:57:10 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 27 08:57:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:57:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:10.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:57:11 compute-0 ceph-mon[74357]: pgmap v999: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 27 08:57:11 compute-0 ceph-mon[74357]: osdmap e141: 3 total, 3 up, 3 in
Jan 27 08:57:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.0 KiB/s wr, 31 op/s
Jan 27 08:57:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:57:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:12.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:57:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:12.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:13 compute-0 ceph-mon[74357]: pgmap v1001: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.0 KiB/s wr, 31 op/s
Jan 27 08:57:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.0 KiB/s wr, 31 op/s
Jan 27 08:57:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:57:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:14.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:57:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:57:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:14.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:57:15
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'volumes', 'backups', 'images', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data']
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:57:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:57:15 compute-0 ceph-mon[74357]: pgmap v1002: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.0 KiB/s wr, 31 op/s
Jan 27 08:57:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 2.0 KiB/s wr, 36 op/s
Jan 27 08:57:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:16.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:17.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 27 08:57:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 27 08:57:17 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 27 08:57:18 compute-0 ceph-mon[74357]: pgmap v1003: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 2.0 KiB/s wr, 36 op/s
Jan 27 08:57:18 compute-0 ceph-mon[74357]: osdmap e142: 3 total, 3 up, 3 in
Jan 27 08:57:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.7 KiB/s wr, 45 op/s
Jan 27 08:57:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:18.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:19.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:19 compute-0 ceph-mon[74357]: pgmap v1005: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.7 KiB/s wr, 45 op/s
Jan 27 08:57:19 compute-0 podman[258041]: 2026-01-27 08:57:19.256539365 +0000 UTC m=+0.061176886 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:57:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Jan 27 08:57:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:20.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:57:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:21.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:57:21 compute-0 ceph-mon[74357]: pgmap v1006: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Jan 27 08:57:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 818 B/s wr, 14 op/s
Jan 27 08:57:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:57:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:22.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:57:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:23.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:23 compute-0 ceph-mon[74357]: pgmap v1007: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 818 B/s wr, 14 op/s
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 818 B/s wr, 14 op/s
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:57:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:57:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:24.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:25.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:25 compute-0 sudo[258063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:25 compute-0 sudo[258063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:25 compute-0 sudo[258063]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:25 compute-0 sudo[258088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:25 compute-0 sudo[258088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:25 compute-0 sudo[258088]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:25 compute-0 ceph-mon[74357]: pgmap v1008: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 818 B/s wr, 14 op/s
Jan 27 08:57:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.6 KiB/s rd, 409 B/s wr, 2 op/s
Jan 27 08:57:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:57:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5267 writes, 22K keys, 5267 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 5267 writes, 5267 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1484 writes, 6576 keys, 1484 commit groups, 1.0 writes per commit group, ingest: 10.15 MB, 0.02 MB/s
                                           Interval WAL: 1484 writes, 1484 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    120.9      0.23              0.08        13    0.018       0      0       0.0       0.0
                                             L6      1/0    7.33 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.6    155.5    129.2      0.77              0.26        12    0.064     56K   6341       0.0       0.0
                                            Sum      1/0    7.33 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.6    120.0    127.3      1.00              0.34        25    0.040     56K   6341       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.6    137.3    136.6      0.42              0.16        12    0.035     29K   3050       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    155.5    129.2      0.77              0.26        12    0.064     56K   6341       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    123.7      0.22              0.08        12    0.019       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.4      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.027, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.12 GB write, 0.07 MB/s write, 0.12 GB read, 0.07 MB/s read, 1.0 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f59eb431f0#2 capacity: 304.00 MB usage: 10.06 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000129 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(582,9.60 MB,3.15804%) FilterBlock(26,161.80 KB,0.0519753%) IndexBlock(26,303.67 KB,0.0975508%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 27 08:57:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:26.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:27.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:27 compute-0 ceph-mon[74357]: pgmap v1009: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.6 KiB/s rd, 409 B/s wr, 2 op/s
Jan 27 08:57:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:28.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:29.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:29 compute-0 ceph-mon[74357]: pgmap v1010: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:30.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:31.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:31 compute-0 ceph-mon[74357]: pgmap v1011: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:32.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:57:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:33.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:57:34 compute-0 ceph-mon[74357]: pgmap v1012: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:34.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:35.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:35 compute-0 ceph-mon[74357]: pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:35 compute-0 podman[258119]: 2026-01-27 08:57:35.267990112 +0000 UTC m=+0.077892132 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 08:57:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:36.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:57:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:37.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:57:37 compute-0 ceph-mon[74357]: pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:38.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:39.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:40.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:40 compute-0 ceph-mon[74357]: pgmap v1015: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:41.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:42 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:57:42.102 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:57:42 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:57:42.102 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 08:57:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:42 compute-0 ceph-mon[74357]: pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:57:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:42.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:57:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:43.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:43 compute-0 ceph-mon[74357]: pgmap v1017: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:44.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:45.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:57:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:57:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:57:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:57:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:57:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:57:45 compute-0 sudo[258152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:45 compute-0 sudo[258152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:45 compute-0 sudo[258152]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:45 compute-0 sudo[258177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:57:45 compute-0 sudo[258177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:57:45 compute-0 sudo[258177]: pam_unix(sudo:session): session closed for user root
Jan 27 08:57:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:45 compute-0 ceph-mon[74357]: pgmap v1018: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:46.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:47.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:47 compute-0 ceph-mon[74357]: pgmap v1019: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:48.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 27 08:57:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:49.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 27 08:57:49 compute-0 ceph-mon[74357]: pgmap v1020: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:50 compute-0 podman[258205]: 2026-01-27 08:57:50.26532545 +0000 UTC m=+0.080513703 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:57:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:50.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:51.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:51 compute-0 ceph-mon[74357]: pgmap v1021: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:52 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:57:52.104 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:57:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:52.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:53.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:53 compute-0 ceph-mon[74357]: pgmap v1022: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:57:54.240 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:57:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:57:54.240 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:57:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:57:54.240 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:57:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:57:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:54.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:57:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:55.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:55 compute-0 ceph-mon[74357]: pgmap v1023: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:57:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:56.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:57:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:57.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:57 compute-0 ceph-mon[74357]: pgmap v1024: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:57:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:57:58.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:57:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:57:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:57:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:57:59.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:57:59 compute-0 ceph-mon[74357]: pgmap v1025: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:57:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3687963763' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:57:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3687963763' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:58:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:58:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:00.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:01.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:01 compute-0 nova_compute[247671]: 2026-01-27 08:58:01.659 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:58:01 compute-0 nova_compute[247671]: 2026-01-27 08:58:01.660 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:58:01 compute-0 nova_compute[247671]: 2026-01-27 08:58:01.660 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 08:58:01 compute-0 ceph-mon[74357]: pgmap v1026: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:58:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:58:02 compute-0 nova_compute[247671]: 2026-01-27 08:58:02.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:58:02 compute-0 nova_compute[247671]: 2026-01-27 08:58:02.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:58:02 compute-0 nova_compute[247671]: 2026-01-27 08:58:02.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:58:02 compute-0 nova_compute[247671]: 2026-01-27 08:58:02.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:58:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:02.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:03.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:03 compute-0 ceph-mon[74357]: pgmap v1027: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:58:03 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3467235544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:58:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:58:04 compute-0 nova_compute[247671]: 2026-01-27 08:58:04.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:58:04 compute-0 nova_compute[247671]: 2026-01-27 08:58:04.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 08:58:04 compute-0 nova_compute[247671]: 2026-01-27 08:58:04.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 08:58:04 compute-0 nova_compute[247671]: 2026-01-27 08:58:04.472 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:58:04 compute-0 nova_compute[247671]: 2026-01-27 08:58:04.472 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:58:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:04.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 27 08:58:04 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 27 08:58:04 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4119943515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:58:04 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 27 08:58:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:05.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:05 compute-0 sudo[258231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:05 compute-0 sudo[258231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:05 compute-0 sudo[258231]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:05 compute-0 sudo[258262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:05 compute-0 sudo[258262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:05 compute-0 sudo[258262]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:05 compute-0 podman[258255]: 2026-01-27 08:58:05.45987567 +0000 UTC m=+0.124795618 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 08:58:05 compute-0 ceph-mon[74357]: pgmap v1028: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:58:05 compute-0 ceph-mon[74357]: osdmap e143: 3 total, 3 up, 3 in
Jan 27 08:58:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 921 B/s wr, 4 op/s
Jan 27 08:58:06 compute-0 nova_compute[247671]: 2026-01-27 08:58:06.425 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:58:06 compute-0 nova_compute[247671]: 2026-01-27 08:58:06.468 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:58:06 compute-0 nova_compute[247671]: 2026-01-27 08:58:06.469 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:58:06 compute-0 nova_compute[247671]: 2026-01-27 08:58:06.469 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:58:06 compute-0 nova_compute[247671]: 2026-01-27 08:58:06.469 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:58:06 compute-0 nova_compute[247671]: 2026-01-27 08:58:06.470 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:58:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:06.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:06 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1287802820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:58:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:58:06 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2237486612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.005 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:58:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:07.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.211 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.212 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5210MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.213 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.213 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.276 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.276 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.293 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:58:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:58:07 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1059008209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.703 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
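The CMD lines above show nova's libvirt driver shelling out to `ceph df --format=json` through oslo.concurrency's processutils to size the RBD backend. A minimal sketch of the same call, assuming the client.openstack keyring referenced by --id is readable and that the usual top-level "stats" keys are present in the JSON report:

    import json
    from oslo_concurrency import processutils

    # Mirrors the logged command: run `ceph df` as client.openstack and
    # decode the JSON report from stdout.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    report = json.loads(out)
    print(report['stats']['total_bytes'], report['stats']['total_avail_bytes'])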
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.709 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.728 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
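The inventory dictionary above is where the hypervisor totals from the Final resource view (8 vCPUs, 7679 MB RAM, 20 GB disk) become schedulable capacity: placement treats (total - reserved) * allocation_ratio as the limit per resource class. A short worked check of the logged numbers:

    # Effective capacity implied by the logged inventory:
    # (total - reserved) * allocation_ratio per resource class.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 18.0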
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.731 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:58:07 compute-0 nova_compute[247671]: 2026-01-27 08:58:07.731 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
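The Acquiring/acquired/released triplet around "compute_resources" is oslo.concurrency's lockutils serializing the resource tracker update. A minimal sketch of the same pattern, with a hypothetical function standing in for the tracker's refresh:

    from oslo_concurrency import lockutils

    # synchronized() emits the same Acquiring/acquired/released DEBUG
    # lines seen above around whatever critical section it guards.
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        pass  # placeholder for the tracker's inventory refresh

    update_available_resource()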
Jan 27 08:58:07 compute-0 ceph-mon[74357]: pgmap v1030: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 921 B/s wr, 4 op/s
Jan 27 08:58:07 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2361479154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:58:07 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2237486612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:58:07 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1059008209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:58:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 27 08:58:08 compute-0 sudo[258351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:08 compute-0 sudo[258351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:08 compute-0 sudo[258351]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:08 compute-0 sudo[258376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:58:08 compute-0 sudo[258376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:08 compute-0 sudo[258376]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:08 compute-0 sudo[258401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:08 compute-0 sudo[258401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:08 compute-0 sudo[258401]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:08 compute-0 sudo[258426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:58:08 compute-0 sudo[258426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:08.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:08 compute-0 sudo[258426]: pam_unix(sudo:session): session closed for user root
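The sudo trail above (a /bin/true reachability probe, `which python3`, then the per-cluster cephadm copy) is the orchestrator's remote execution loop, and gather-facts prints a JSON dictionary of host facts on stdout. A sketch of invoking the same binary directly, reusing the path from the log; treating "hostname" as one of the fact keys is an assumption:

    import json
    import subprocess

    # Run the per-cluster cephadm copy the way the orchestrator does
    # (root required, hence the logged sudo invocation).
    cephadm = ('/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/'
               'cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d')
    out = subprocess.run(
        ['sudo', 'python3', cephadm, '--timeout', '895', 'gather-facts'],
        check=True, capture_output=True, text=True).stdout
    facts = json.loads(out)
    print(facts.get('hostname'))   # assumed key; compute-0 on this host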
Jan 27 08:58:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:58:08 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:58:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:58:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:58:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:58:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:58:08 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 8b4e06a9-2e62-4695-a4f1-24e0b2e7a6db does not exist
Jan 27 08:58:08 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 69ac5e18-e183-4975-acd8-f1e45194bc46 does not exist
Jan 27 08:58:08 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 35484c3b-5bdd-4f55-ba9e-1c20dfa78417 does not exist
Jan 27 08:58:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:58:08 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:58:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:58:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:58:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:58:08 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
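Each handle_command line paired with its audit dispatch above is one mon_command round trip from the mgr. The same path is reachable from Python through the rados bindings; a minimal sketch issuing the logged osd tree query, assuming the caller has a usable ceph.conf and keyring:

    import json
    import rados

    # Connect with the local ceph.conf, then dispatch the same command
    # the audit log shows: {"prefix": "osd tree", "states": ["destroyed"]}.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = json.dumps({'prefix': 'osd tree',
                      'states': ['destroyed'], 'format': 'json'})
    ret, out, errs = cluster.mon_command(cmd, b'')
    print(ret, json.loads(out or b'{}'))
    cluster.shutdown()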
Jan 27 08:58:09 compute-0 sudo[258483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:09 compute-0 sudo[258483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:09 compute-0 sudo[258483]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:09.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:09 compute-0 sudo[258508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:58:09 compute-0 sudo[258508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:09 compute-0 sudo[258508]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:09 compute-0 sudo[258533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:09 compute-0 sudo[258533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:09 compute-0 sudo[258533]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:09 compute-0 sudo[258558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:58:09 compute-0 sudo[258558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:09 compute-0 podman[258623]: 2026-01-27 08:58:09.61111391 +0000 UTC m=+0.044334568 container create 915a2cdc6ea121e028122a944b17710c307685d9893de8a823a77dbadc9df8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lederberg, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 08:58:09 compute-0 systemd[1]: Started libpod-conmon-915a2cdc6ea121e028122a944b17710c307685d9893de8a823a77dbadc9df8ad.scope.
Jan 27 08:58:09 compute-0 podman[258623]: 2026-01-27 08:58:09.588437383 +0000 UTC m=+0.021658031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:58:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:58:09 compute-0 podman[258623]: 2026-01-27 08:58:09.702135248 +0000 UTC m=+0.135355966 container init 915a2cdc6ea121e028122a944b17710c307685d9893de8a823a77dbadc9df8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:58:09 compute-0 podman[258623]: 2026-01-27 08:58:09.710875297 +0000 UTC m=+0.144095955 container start 915a2cdc6ea121e028122a944b17710c307685d9893de8a823a77dbadc9df8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lederberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:58:09 compute-0 podman[258623]: 2026-01-27 08:58:09.714625308 +0000 UTC m=+0.147845966 container attach 915a2cdc6ea121e028122a944b17710c307685d9893de8a823a77dbadc9df8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:58:09 compute-0 hungry_lederberg[258641]: 167 167
Jan 27 08:58:09 compute-0 podman[258623]: 2026-01-27 08:58:09.717079455 +0000 UTC m=+0.150300093 container died 915a2cdc6ea121e028122a944b17710c307685d9893de8a823a77dbadc9df8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lederberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:58:09 compute-0 systemd[1]: libpod-915a2cdc6ea121e028122a944b17710c307685d9893de8a823a77dbadc9df8ad.scope: Deactivated successfully.
Jan 27 08:58:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-df2f21f62987484f03964ac05e0ee61f24dc32252543886772ef0f9837ed0721-merged.mount: Deactivated successfully.
Jan 27 08:58:09 compute-0 podman[258623]: 2026-01-27 08:58:09.749833306 +0000 UTC m=+0.183053924 container remove 915a2cdc6ea121e028122a944b17710c307685d9893de8a823a77dbadc9df8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:58:09 compute-0 systemd[1]: libpod-conmon-915a2cdc6ea121e028122a944b17710c307685d9893de8a823a77dbadc9df8ad.scope: Deactivated successfully.
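The create, init, start, attach, died, remove burst above is one short-lived helper container, and its only output, "167 167", looks like cephadm probing the ceph user's uid and gid inside the image. A sketch reproducing such a probe with the digest from the log; stat'ing /var/lib/ceph is an assumption about what cephadm actually inspects:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    # --rm yields the same journald lifecycle: the container is created,
    # runs a single command, dies, and is removed immediately.
    out = subprocess.run(
        ['podman', 'run', '--rm', IMAGE,
         'stat', '-c', '%u %g', '/var/lib/ceph'],
        check=True, capture_output=True, text=True).stdout
    uid, gid = out.split()
    print(uid, gid)   # expected "167 167" for the ceph user and group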
Jan 27 08:58:09 compute-0 ceph-mon[74357]: pgmap v1031: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 27 08:58:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:58:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:58:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:58:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:58:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:58:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:58:09 compute-0 podman[258665]: 2026-01-27 08:58:09.956982656 +0000 UTC m=+0.038169060 container create 7c5fc0cd440c4b191eda1290335ee856a53620918ac2c9dbcd4aa3a3efba303d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 27 08:58:09 compute-0 systemd[1]: Started libpod-conmon-7c5fc0cd440c4b191eda1290335ee856a53620918ac2c9dbcd4aa3a3efba303d.scope.
Jan 27 08:58:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942f450c85c2a383985277b770ce39a960dc0013d37dafad3dd8ccb0feeff588/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942f450c85c2a383985277b770ce39a960dc0013d37dafad3dd8ccb0feeff588/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942f450c85c2a383985277b770ce39a960dc0013d37dafad3dd8ccb0feeff588/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942f450c85c2a383985277b770ce39a960dc0013d37dafad3dd8ccb0feeff588/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942f450c85c2a383985277b770ce39a960dc0013d37dafad3dd8ccb0feeff588/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:10 compute-0 podman[258665]: 2026-01-27 08:58:10.03645939 +0000 UTC m=+0.117645784 container init 7c5fc0cd440c4b191eda1290335ee856a53620918ac2c9dbcd4aa3a3efba303d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:58:10 compute-0 podman[258665]: 2026-01-27 08:58:09.939356736 +0000 UTC m=+0.020543140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:58:10 compute-0 podman[258665]: 2026-01-27 08:58:10.04493035 +0000 UTC m=+0.126116734 container start 7c5fc0cd440c4b191eda1290335ee856a53620918ac2c9dbcd4aa3a3efba303d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 27 08:58:10 compute-0 podman[258665]: 2026-01-27 08:58:10.04898924 +0000 UTC m=+0.130175614 container attach 7c5fc0cd440c4b191eda1290335ee856a53620918ac2c9dbcd4aa3a3efba303d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:58:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 27 08:58:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:10.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:10 compute-0 optimistic_bose[258682]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:58:10 compute-0 optimistic_bose[258682]: --> relative data size: 1.0
Jan 27 08:58:10 compute-0 optimistic_bose[258682]: --> All data devices are unavailable
Jan 27 08:58:10 compute-0 systemd[1]: libpod-7c5fc0cd440c4b191eda1290335ee856a53620918ac2c9dbcd4aa3a3efba303d.scope: Deactivated successfully.
Jan 27 08:58:10 compute-0 conmon[258682]: conmon 7c5fc0cd440c4b191eda <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c5fc0cd440c4b191eda1290335ee856a53620918ac2c9dbcd4aa3a3efba303d.scope/container/memory.events
Jan 27 08:58:10 compute-0 podman[258665]: 2026-01-27 08:58:10.805763752 +0000 UTC m=+0.886950226 container died 7c5fc0cd440c4b191eda1290335ee856a53620918ac2c9dbcd4aa3a3efba303d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-942f450c85c2a383985277b770ce39a960dc0013d37dafad3dd8ccb0feeff588-merged.mount: Deactivated successfully.
Jan 27 08:58:10 compute-0 podman[258665]: 2026-01-27 08:58:10.868428409 +0000 UTC m=+0.949614793 container remove 7c5fc0cd440c4b191eda1290335ee856a53620918ac2c9dbcd4aa3a3efba303d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:58:10 compute-0 systemd[1]: libpod-conmon-7c5fc0cd440c4b191eda1290335ee856a53620918ac2c9dbcd4aa3a3efba303d.scope: Deactivated successfully.
Jan 27 08:58:10 compute-0 sudo[258558]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:10 compute-0 sudo[258712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:10 compute-0 sudo[258712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:10 compute-0 sudo[258712]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:11 compute-0 sudo[258737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:58:11 compute-0 sudo[258737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:11 compute-0 sudo[258737]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:11.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:11 compute-0 sudo[258762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:11 compute-0 sudo[258762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:11 compute-0 sudo[258762]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:11 compute-0 sudo[258787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:58:11 compute-0 sudo[258787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:11 compute-0 podman[258848]: 2026-01-27 08:58:11.566457101 +0000 UTC m=+0.034229182 container create 1b1e1bc38408ffa73ee5891774f662e2ea89daf46f64820f80a4fa676cefa7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_babbage, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:58:11 compute-0 systemd[1]: Started libpod-conmon-1b1e1bc38408ffa73ee5891774f662e2ea89daf46f64820f80a4fa676cefa7f5.scope.
Jan 27 08:58:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:58:11 compute-0 podman[258848]: 2026-01-27 08:58:11.630229828 +0000 UTC m=+0.098001929 container init 1b1e1bc38408ffa73ee5891774f662e2ea89daf46f64820f80a4fa676cefa7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 08:58:11 compute-0 podman[258848]: 2026-01-27 08:58:11.636148549 +0000 UTC m=+0.103920630 container start 1b1e1bc38408ffa73ee5891774f662e2ea89daf46f64820f80a4fa676cefa7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_babbage, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 08:58:11 compute-0 podman[258848]: 2026-01-27 08:58:11.638604846 +0000 UTC m=+0.106376927 container attach 1b1e1bc38408ffa73ee5891774f662e2ea89daf46f64820f80a4fa676cefa7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 08:58:11 compute-0 gracious_babbage[258865]: 167 167
Jan 27 08:58:11 compute-0 systemd[1]: libpod-1b1e1bc38408ffa73ee5891774f662e2ea89daf46f64820f80a4fa676cefa7f5.scope: Deactivated successfully.
Jan 27 08:58:11 compute-0 podman[258848]: 2026-01-27 08:58:11.640411814 +0000 UTC m=+0.108183905 container died 1b1e1bc38408ffa73ee5891774f662e2ea89daf46f64820f80a4fa676cefa7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_babbage, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:58:11 compute-0 podman[258848]: 2026-01-27 08:58:11.551418042 +0000 UTC m=+0.019190153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9b0ed4683a7156f9734d37da6c5e5f10a989c02cd0e0c51c529eb33b5dd2418-merged.mount: Deactivated successfully.
Jan 27 08:58:11 compute-0 podman[258848]: 2026-01-27 08:58:11.67072397 +0000 UTC m=+0.138496051 container remove 1b1e1bc38408ffa73ee5891774f662e2ea89daf46f64820f80a4fa676cefa7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:58:11 compute-0 systemd[1]: libpod-conmon-1b1e1bc38408ffa73ee5891774f662e2ea89daf46f64820f80a4fa676cefa7f5.scope: Deactivated successfully.
Jan 27 08:58:11 compute-0 podman[258891]: 2026-01-27 08:58:11.821857724 +0000 UTC m=+0.049113207 container create b74cc81dd9fb39785931d1e267e4ed60af63a65a69d3191eef34022471efb98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:58:11 compute-0 ceph-mon[74357]: pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 27 08:58:11 compute-0 systemd[1]: Started libpod-conmon-b74cc81dd9fb39785931d1e267e4ed60af63a65a69d3191eef34022471efb98d.scope.
Jan 27 08:58:11 compute-0 podman[258891]: 2026-01-27 08:58:11.807961546 +0000 UTC m=+0.035217039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:58:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9584f02db711032f1a7a7d8f6971f53dc67ba5299bd72c692276028734ddad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9584f02db711032f1a7a7d8f6971f53dc67ba5299bd72c692276028734ddad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9584f02db711032f1a7a7d8f6971f53dc67ba5299bd72c692276028734ddad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9584f02db711032f1a7a7d8f6971f53dc67ba5299bd72c692276028734ddad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:11 compute-0 podman[258891]: 2026-01-27 08:58:11.916324266 +0000 UTC m=+0.143579779 container init b74cc81dd9fb39785931d1e267e4ed60af63a65a69d3191eef34022471efb98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:58:11 compute-0 podman[258891]: 2026-01-27 08:58:11.923697706 +0000 UTC m=+0.150953179 container start b74cc81dd9fb39785931d1e267e4ed60af63a65a69d3191eef34022471efb98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:58:11 compute-0 podman[258891]: 2026-01-27 08:58:11.931950901 +0000 UTC m=+0.159206424 container attach b74cc81dd9fb39785931d1e267e4ed60af63a65a69d3191eef34022471efb98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:58:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 27 08:58:12 compute-0 reverent_bartik[258907]: {
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:     "0": [
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:         {
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             "devices": [
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "/dev/loop3"
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             ],
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             "lv_name": "ceph_lv0",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             "lv_size": "7511998464",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             "name": "ceph_lv0",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             "tags": {
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "ceph.cluster_name": "ceph",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "ceph.crush_device_class": "",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "ceph.encrypted": "0",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "ceph.osd_id": "0",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "ceph.type": "block",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:                 "ceph.vdo": "0"
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             },
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             "type": "block",
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:             "vg_name": "ceph_vg0"
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:         }
Jan 27 08:58:12 compute-0 reverent_bartik[258907]:     ]
Jan 27 08:58:12 compute-0 reverent_bartik[258907]: }
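This lvm list report also explains the earlier "All data devices are unavailable" message from the batch run: /dev/ceph_vg0/ceph_lv0 already carries OSD 0, as its ceph.osd_id tag shows, so there was nothing left to create. A minimal sketch of consuming the report the way an orchestrator might (the logged command ran inside the ceph container; the sketch assumes ceph-volume is directly on PATH):

    import json
    import subprocess

    # `ceph-volume lvm list --format json` keys the report by OSD id;
    # each entry carries the LV path, backing devices, and ceph.* tags.
    out = subprocess.run(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'],
        check=True, capture_output=True, text=True).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv['lv_path'], lv['devices'],
                  lv['tags'].get('ceph.osd_fsid'))
    # With the report above this prints:
    # 0 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3'] c06a7c81-ab3c-42b8-812f-79473670be30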
Jan 27 08:58:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:58:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:12.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:58:12 compute-0 systemd[1]: libpod-b74cc81dd9fb39785931d1e267e4ed60af63a65a69d3191eef34022471efb98d.scope: Deactivated successfully.
Jan 27 08:58:12 compute-0 podman[258891]: 2026-01-27 08:58:12.735873117 +0000 UTC m=+0.963128640 container died b74cc81dd9fb39785931d1e267e4ed60af63a65a69d3191eef34022471efb98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb9584f02db711032f1a7a7d8f6971f53dc67ba5299bd72c692276028734ddad-merged.mount: Deactivated successfully.
Jan 27 08:58:12 compute-0 podman[258891]: 2026-01-27 08:58:12.932959052 +0000 UTC m=+1.160214535 container remove b74cc81dd9fb39785931d1e267e4ed60af63a65a69d3191eef34022471efb98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:58:12 compute-0 systemd[1]: libpod-conmon-b74cc81dd9fb39785931d1e267e4ed60af63a65a69d3191eef34022471efb98d.scope: Deactivated successfully.
Jan 27 08:58:12 compute-0 sudo[258787]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:13 compute-0 sudo[258927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:13 compute-0 sudo[258927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:13 compute-0 sudo[258927]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:13.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:13 compute-0 sudo[258952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:58:13 compute-0 sudo[258952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:13 compute-0 sudo[258952]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:13 compute-0 sudo[258977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:13 compute-0 sudo[258977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:13 compute-0 sudo[258977]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:13 compute-0 sudo[259002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:58:13 compute-0 sudo[259002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:13 compute-0 podman[259067]: 2026-01-27 08:58:13.643167826 +0000 UTC m=+0.048439309 container create 7bf7c580e5408f65912006098ac58ad49a7f16f7427da172b40dd7c12df3dc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:58:13 compute-0 systemd[1]: Started libpod-conmon-7bf7c580e5408f65912006098ac58ad49a7f16f7427da172b40dd7c12df3dc11.scope.
Jan 27 08:58:13 compute-0 podman[259067]: 2026-01-27 08:58:13.616996164 +0000 UTC m=+0.022267657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:58:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:58:13 compute-0 podman[259067]: 2026-01-27 08:58:13.853578774 +0000 UTC m=+0.258850297 container init 7bf7c580e5408f65912006098ac58ad49a7f16f7427da172b40dd7c12df3dc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 08:58:13 compute-0 podman[259067]: 2026-01-27 08:58:13.860779831 +0000 UTC m=+0.266051304 container start 7bf7c580e5408f65912006098ac58ad49a7f16f7427da172b40dd7c12df3dc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:58:13 compute-0 youthful_torvalds[259084]: 167 167
Jan 27 08:58:13 compute-0 systemd[1]: libpod-7bf7c580e5408f65912006098ac58ad49a7f16f7427da172b40dd7c12df3dc11.scope: Deactivated successfully.
Jan 27 08:58:13 compute-0 podman[259067]: 2026-01-27 08:58:13.897632724 +0000 UTC m=+0.302904197 container attach 7bf7c580e5408f65912006098ac58ad49a7f16f7427da172b40dd7c12df3dc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:58:13 compute-0 podman[259067]: 2026-01-27 08:58:13.898696382 +0000 UTC m=+0.303967875 container died 7bf7c580e5408f65912006098ac58ad49a7f16f7427da172b40dd7c12df3dc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 27 08:58:13 compute-0 ceph-mon[74357]: pgmap v1033: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 27 08:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-912de657dd4690cd725a1c62356f617ee8f057eb5c4a3b95dd259a0507e42067-merged.mount: Deactivated successfully.
Jan 27 08:58:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 27 08:58:14 compute-0 podman[259067]: 2026-01-27 08:58:14.283650423 +0000 UTC m=+0.688921916 container remove 7bf7c580e5408f65912006098ac58ad49a7f16f7427da172b40dd7c12df3dc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:58:14 compute-0 systemd[1]: libpod-conmon-7bf7c580e5408f65912006098ac58ad49a7f16f7427da172b40dd7c12df3dc11.scope: Deactivated successfully.
Jan 27 08:58:14 compute-0 podman[259110]: 2026-01-27 08:58:14.491913462 +0000 UTC m=+0.051545253 container create 2f9c7f6cc39d1ac3d5d8006afb7a566b52c31d8a00b45a03b32bf3916130e3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hamilton, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 27 08:58:14 compute-0 systemd[1]: Started libpod-conmon-2f9c7f6cc39d1ac3d5d8006afb7a566b52c31d8a00b45a03b32bf3916130e3b3.scope.
Jan 27 08:58:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:58:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202ecd9e3dac9b47926501fd88db13d7db65c3ba2db3baca8027a55a090c9d2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:14 compute-0 podman[259110]: 2026-01-27 08:58:14.467243411 +0000 UTC m=+0.026875242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:58:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202ecd9e3dac9b47926501fd88db13d7db65c3ba2db3baca8027a55a090c9d2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202ecd9e3dac9b47926501fd88db13d7db65c3ba2db3baca8027a55a090c9d2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202ecd9e3dac9b47926501fd88db13d7db65c3ba2db3baca8027a55a090c9d2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:14 compute-0 podman[259110]: 2026-01-27 08:58:14.573223716 +0000 UTC m=+0.132855537 container init 2f9c7f6cc39d1ac3d5d8006afb7a566b52c31d8a00b45a03b32bf3916130e3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hamilton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 27 08:58:14 compute-0 podman[259110]: 2026-01-27 08:58:14.586664332 +0000 UTC m=+0.146296123 container start 2f9c7f6cc39d1ac3d5d8006afb7a566b52c31d8a00b45a03b32bf3916130e3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:58:14 compute-0 podman[259110]: 2026-01-27 08:58:14.589953951 +0000 UTC m=+0.149585772 container attach 2f9c7f6cc39d1ac3d5d8006afb7a566b52c31d8a00b45a03b32bf3916130e3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hamilton, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:58:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:14.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:58:15
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'volumes', '.mgr', 'backups']
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:58:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:15.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:58:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:15 compute-0 kind_hamilton[259127]: {
Jan 27 08:58:15 compute-0 kind_hamilton[259127]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:58:15 compute-0 kind_hamilton[259127]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:58:15 compute-0 kind_hamilton[259127]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:58:15 compute-0 kind_hamilton[259127]:         "osd_id": 0,
Jan 27 08:58:15 compute-0 kind_hamilton[259127]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:58:15 compute-0 kind_hamilton[259127]:         "type": "bluestore"
Jan 27 08:58:15 compute-0 kind_hamilton[259127]:     }
Jan 27 08:58:15 compute-0 kind_hamilton[259127]: }
Jan 27 08:58:15 compute-0 podman[259110]: 2026-01-27 08:58:15.453371696 +0000 UTC m=+1.013003497 container died 2f9c7f6cc39d1ac3d5d8006afb7a566b52c31d8a00b45a03b32bf3916130e3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:58:15 compute-0 systemd[1]: libpod-2f9c7f6cc39d1ac3d5d8006afb7a566b52c31d8a00b45a03b32bf3916130e3b3.scope: Deactivated successfully.
Jan 27 08:58:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-202ecd9e3dac9b47926501fd88db13d7db65c3ba2db3baca8027a55a090c9d2d-merged.mount: Deactivated successfully.
Jan 27 08:58:15 compute-0 podman[259110]: 2026-01-27 08:58:15.504335105 +0000 UTC m=+1.063966916 container remove 2f9c7f6cc39d1ac3d5d8006afb7a566b52c31d8a00b45a03b32bf3916130e3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:58:15 compute-0 systemd[1]: libpod-conmon-2f9c7f6cc39d1ac3d5d8006afb7a566b52c31d8a00b45a03b32bf3916130e3b3.scope: Deactivated successfully.
Jan 27 08:58:15 compute-0 sudo[259002]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:58:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:58:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:58:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 9c89ba09-872d-4624-b909-d6885f94715d does not exist
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 3f54377f-45be-4700-b366-edda669e571b does not exist
Jan 27 08:58:15 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d6411fa7-ba01-4d23-af79-d536f5ae78c7 does not exist
Jan 27 08:58:15 compute-0 sudo[259161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:15 compute-0 sudo[259161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:15 compute-0 sudo[259161]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:15 compute-0 sudo[259186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:58:15 compute-0 sudo[259186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:15 compute-0 sudo[259186]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:15 compute-0 ceph-mon[74357]: pgmap v1034: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 27 08:58:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:58:15 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:58:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 738 KiB/s rd, 1.5 KiB/s wr, 19 op/s
Jan 27 08:58:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:16.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 27 08:58:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 27 08:58:17 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 27 08:58:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:17.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:18 compute-0 ceph-mon[74357]: pgmap v1035: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 738 KiB/s rd, 1.5 KiB/s wr, 19 op/s
Jan 27 08:58:18 compute-0 ceph-mon[74357]: osdmap e144: 3 total, 3 up, 3 in
Jan 27 08:58:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 307 B/s wr, 8 op/s
Jan 27 08:58:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:18.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:19.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 54 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 9 op/s
Jan 27 08:58:20 compute-0 ceph-mon[74357]: pgmap v1037: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 307 B/s wr, 8 op/s
Jan 27 08:58:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:20.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:21.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:21 compute-0 ceph-mon[74357]: pgmap v1038: 305 pgs: 305 active+clean; 54 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 9 op/s
Jan 27 08:58:21 compute-0 podman[259213]: 2026-01-27 08:58:21.234504198 +0000 UTC m=+0.052103619 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 08:58:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 27 op/s
Jan 27 08:58:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:22.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:23.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:23 compute-0 ceph-mon[74357]: pgmap v1039: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 27 op/s
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 27 op/s
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028553502466033874 of space, bias 1.0, pg target 0.8566050739810163 quantized to 32 (current 32)
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:58:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 08:58:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:24.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:25.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:25 compute-0 ceph-mon[74357]: pgmap v1040: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 27 op/s
Jan 27 08:58:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:25 compute-0 sudo[259235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:25 compute-0 sudo[259235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:25 compute-0 sudo[259235]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:25 compute-0 sudo[259260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:25 compute-0 sudo[259260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:25 compute-0 sudo[259260]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 20 op/s
Jan 27 08:58:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:26.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:27.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:27 compute-0 ceph-mon[74357]: pgmap v1041: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 20 op/s
Jan 27 08:58:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 750 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Jan 27 08:58:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:28.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:29.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:29 compute-0 ceph-mon[74357]: pgmap v1042: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 750 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Jan 27 08:58:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 693 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Jan 27 08:58:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:30.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:31.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:31 compute-0 ceph-mon[74357]: pgmap v1043: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 693 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Jan 27 08:58:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 684 KiB/s wr, 14 op/s
Jan 27 08:58:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 27 08:58:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 27 08:58:32 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 27 08:58:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:32.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:33.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:33 compute-0 ceph-mon[74357]: pgmap v1044: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 684 KiB/s wr, 14 op/s
Jan 27 08:58:33 compute-0 ceph-mon[74357]: osdmap e145: 3 total, 3 up, 3 in
Jan 27 08:58:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:58:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:34.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:35.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:35 compute-0 ceph-mon[74357]: pgmap v1046: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:58:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 26 op/s
Jan 27 08:58:36 compute-0 podman[259291]: 2026-01-27 08:58:36.290815081 +0000 UTC m=+0.102683227 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 27 08:58:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 08:58:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:36.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 08:58:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:37.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:37 compute-0 ceph-mon[74357]: pgmap v1047: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 26 op/s
Jan 27 08:58:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 27 08:58:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:38.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:39.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:39 compute-0 ceph-mon[74357]: pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 27 08:58:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 27 08:58:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 27 08:58:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 27 08:58:40 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 27 08:58:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:40.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:41.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:41 compute-0 ceph-mon[74357]: pgmap v1049: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 27 08:58:41 compute-0 ceph-mon[74357]: osdmap e146: 3 total, 3 up, 3 in
Jan 27 08:58:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 27 08:58:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:42.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:43.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:43 compute-0 ceph-mon[74357]: pgmap v1051: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 27 08:58:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 27 08:58:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:44.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:58:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:58:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:58:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:58:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:58:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:58:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:45.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:45 compute-0 ceph-mon[74357]: pgmap v1052: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 27 08:58:45 compute-0 sudo[259321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:45 compute-0 sudo[259321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:45 compute-0 sudo[259321]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:45 compute-0 sudo[259347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:58:45 compute-0 sudo[259347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:58:45 compute-0 sudo[259347]: pam_unix(sudo:session): session closed for user root
Jan 27 08:58:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Jan 27 08:58:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:46.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:46 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:58:46.910 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:58:46 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:58:46.911 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 08:58:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:47.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:47 compute-0 ceph-mon[74357]: pgmap v1053: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Jan 27 08:58:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:58:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:48.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:48 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:58:48.913 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:58:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:49.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:58:50 compute-0 ceph-mon[74357]: pgmap v1054: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:58:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:50.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:51.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 27 08:58:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 27 08:58:51 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 27 08:58:51 compute-0 ceph-mon[74357]: pgmap v1055: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:58:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 716 B/s wr, 3 op/s
Jan 27 08:58:52 compute-0 podman[259375]: 2026-01-27 08:58:52.262031251 +0000 UTC m=+0.075556098 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 08:58:52 compute-0 ceph-mon[74357]: osdmap e147: 3 total, 3 up, 3 in
Jan 27 08:58:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:52.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:53.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:53 compute-0 ceph-mon[74357]: pgmap v1057: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 716 B/s wr, 3 op/s
Jan 27 08:58:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 716 B/s wr, 3 op/s
Jan 27 08:58:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:58:54.241 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:58:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:58:54.241 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:58:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:58:54.241 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:58:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:54.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:55.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 27 08:58:55 compute-0 ceph-mon[74357]: pgmap v1058: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 716 B/s wr, 3 op/s
Jan 27 08:58:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 27 08:58:56 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 27 08:58:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.6 KiB/s wr, 22 op/s
Jan 27 08:58:56 compute-0 nova_compute[247671]: 2026-01-27 08:58:56.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:58:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:56.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:57 compute-0 ceph-mon[74357]: osdmap e148: 3 total, 3 up, 3 in
Jan 27 08:58:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:57.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:58 compute-0 ceph-mon[74357]: pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.6 KiB/s wr, 22 op/s
Jan 27 08:58:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.4 KiB/s wr, 31 op/s
Jan 27 08:58:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:58:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:58:58.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:58:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:58:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:58:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:58:59.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:58:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 08:58:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3833000306' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:58:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 08:58:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3833000306' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:58:59 compute-0 ceph-mon[74357]: pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.4 KiB/s wr, 31 op/s
Jan 27 08:59:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.3 KiB/s wr, 25 op/s
Jan 27 08:59:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:00 compute-0 nova_compute[247671]: 2026-01-27 08:59:00.484 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:00 compute-0 nova_compute[247671]: 2026-01-27 08:59:00.484 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 08:59:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3833000306' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:59:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3833000306' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 08:59:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:00.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:01.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:01 compute-0 nova_compute[247671]: 2026-01-27 08:59:01.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 27 08:59:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 27 08:59:01 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:01.721046) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504341721333, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2074, "num_deletes": 252, "total_data_size": 3796869, "memory_usage": 3865264, "flush_reason": "Manual Compaction"}
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504341796679, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3731170, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22097, "largest_seqno": 24170, "table_properties": {"data_size": 3721705, "index_size": 6023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19085, "raw_average_key_size": 20, "raw_value_size": 3702864, "raw_average_value_size": 3947, "num_data_blocks": 267, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769504128, "oldest_key_time": 1769504128, "file_creation_time": 1769504341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 75660 microseconds, and 8585 cpu microseconds.
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:59:01 compute-0 ceph-mon[74357]: pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.3 KiB/s wr, 25 op/s
Jan 27 08:59:01 compute-0 ceph-mon[74357]: osdmap e149: 3 total, 3 up, 3 in
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:01.796726) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3731170 bytes OK
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:01.796748) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:01.806609) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:01.806632) EVENT_LOG_v1 {"time_micros": 1769504341806626, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:01.806650) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3788495, prev total WAL file size 3790552, number of live WAL files 2.
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:01.807533) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3643KB)], [53(7501KB)]
Jan 27 08:59:01 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504341807582, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11412771, "oldest_snapshot_seqno": -1}
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4852 keys, 9380189 bytes, temperature: kUnknown
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504342052168, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9380189, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9346250, "index_size": 20709, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 122479, "raw_average_key_size": 25, "raw_value_size": 9256838, "raw_average_value_size": 1907, "num_data_blocks": 850, "num_entries": 4852, "num_filter_entries": 4852, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769504341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:02.053625) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9380189 bytes
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:02.119539) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 46.7 rd, 38.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(5.6) write-amplify(2.5) OK, records in: 5374, records dropped: 522 output_compression: NoCompression
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:02.119583) EVENT_LOG_v1 {"time_micros": 1769504342119567, "job": 28, "event": "compaction_finished", "compaction_time_micros": 244642, "compaction_time_cpu_micros": 20838, "output_level": 6, "num_output_files": 1, "total_output_size": 9380189, "num_input_records": 5374, "num_output_records": 4852, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504342120517, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504342121978, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:01.807471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:02.122050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:02.122054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:02.122056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:02.122057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:59:02 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-08:59:02.122059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:59:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 4.4 KiB/s wr, 43 op/s
Jan 27 08:59:02 compute-0 nova_compute[247671]: 2026-01-27 08:59:02.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:02 compute-0 nova_compute[247671]: 2026-01-27 08:59:02.453 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:02 compute-0 nova_compute[247671]: 2026-01-27 08:59:02.454 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:02 compute-0 nova_compute[247671]: 2026-01-27 08:59:02.454 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 08:59:02 compute-0 nova_compute[247671]: 2026-01-27 08:59:02.555 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 08:59:02 compute-0 nova_compute[247671]: 2026-01-27 08:59:02.555 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:02 compute-0 nova_compute[247671]: 2026-01-27 08:59:02.555 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 08:59:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:02.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:03 compute-0 nova_compute[247671]: 2026-01-27 08:59:03.557 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 2.6 KiB/s wr, 25 op/s
Jan 27 08:59:04 compute-0 ceph-mon[74357]: pgmap v1064: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 4.4 KiB/s wr, 43 op/s
Jan 27 08:59:04 compute-0 nova_compute[247671]: 2026-01-27 08:59:04.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:04 compute-0 nova_compute[247671]: 2026-01-27 08:59:04.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:04.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:05.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:05 compute-0 nova_compute[247671]: 2026-01-27 08:59:05.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:05 compute-0 nova_compute[247671]: 2026-01-27 08:59:05.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 08:59:05 compute-0 nova_compute[247671]: 2026-01-27 08:59:05.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 08:59:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:05 compute-0 nova_compute[247671]: 2026-01-27 08:59:05.453 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:59:05 compute-0 ceph-mon[74357]: pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 2.6 KiB/s wr, 25 op/s
Jan 27 08:59:05 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2051059385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:59:05 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2663991727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:59:05 compute-0 sudo[259402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:05 compute-0 sudo[259402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:05 compute-0 sudo[259402]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:05 compute-0 sudo[259427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:05 compute-0 sudo[259427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:05 compute-0 sudo[259427]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.6 KiB/s wr, 26 op/s
Jan 27 08:59:06 compute-0 nova_compute[247671]: 2026-01-27 08:59:06.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:59:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:06.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:59:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:07.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:07 compute-0 podman[259452]: 2026-01-27 08:59:07.310950664 +0000 UTC m=+0.114682394 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 08:59:07 compute-0 nova_compute[247671]: 2026-01-27 08:59:07.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:07 compute-0 nova_compute[247671]: 2026-01-27 08:59:07.469 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:59:07 compute-0 nova_compute[247671]: 2026-01-27 08:59:07.469 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:59:07 compute-0 nova_compute[247671]: 2026-01-27 08:59:07.469 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:59:07 compute-0 nova_compute[247671]: 2026-01-27 08:59:07.470 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:59:07 compute-0 nova_compute[247671]: 2026-01-27 08:59:07.470 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:59:07 compute-0 ceph-mon[74357]: pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.6 KiB/s wr, 26 op/s
Jan 27 08:59:07 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1054747622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:59:07 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1839595571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:59:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:59:07 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3436243366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:59:07 compute-0 nova_compute[247671]: 2026-01-27 08:59:07.931 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.088 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.089 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5209MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.089 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.090 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:59:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.1 KiB/s wr, 19 op/s
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.184 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 5339f171-08f3-4567-86e4-06108043f7a5 has allocations against this compute host but is not found in the database.
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.184 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.185 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.225 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:59:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 08:59:08 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701979594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.694 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.701 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.751 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.753 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:59:08 compute-0 nova_compute[247671]: 2026-01-27 08:59:08.753 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:59:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:08.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:08 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3436243366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:59:08 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/701979594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 08:59:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:59:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:09.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:59:10 compute-0 ceph-mon[74357]: pgmap v1067: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.1 KiB/s wr, 19 op/s
Jan 27 08:59:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.1 KiB/s wr, 19 op/s
Jan 27 08:59:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:59:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:10.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:59:11 compute-0 ceph-mon[74357]: pgmap v1068: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.1 KiB/s wr, 19 op/s
Jan 27 08:59:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:11.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 4.5 KiB/s rd, 579 B/s wr, 6 op/s
Jan 27 08:59:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 27 08:59:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 27 08:59:12 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 27 08:59:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:59:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:12.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:59:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:13.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:13 compute-0 ceph-mon[74357]: pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 4.5 KiB/s rd, 579 B/s wr, 6 op/s
Jan 27 08:59:13 compute-0 ceph-mon[74357]: osdmap e150: 3 total, 3 up, 3 in
Jan 27 08:59:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 4.8 KiB/s rd, 614 B/s wr, 6 op/s
Jan 27 08:59:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:14.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_08:59:15
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'backups', 'default.rgw.meta', 'default.rgw.log']
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:59:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:15.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:59:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:59:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 27 08:59:15 compute-0 ceph-mon[74357]: pgmap v1071: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 4.8 KiB/s rd, 614 B/s wr, 6 op/s
Jan 27 08:59:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 27 08:59:15 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 27 08:59:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 41 MiB data, 202 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 27 08:59:16 compute-0 sudo[259528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:16 compute-0 sudo[259528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:16 compute-0 sudo[259528]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:16 compute-0 sudo[259553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:59:16 compute-0 sudo[259553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:16 compute-0 sudo[259553]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:16 compute-0 sudo[259578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:16 compute-0 sudo[259578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:16 compute-0 sudo[259578]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:16 compute-0 sudo[259603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 27 08:59:16 compute-0 sudo[259603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:16 compute-0 sudo[259603]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:59:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:59:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:59:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:16.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:59:16 compute-0 sudo[259651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:16 compute-0 sudo[259651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:16 compute-0 sudo[259651]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:16 compute-0 sudo[259676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:59:16 compute-0 sudo[259676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:16 compute-0 sudo[259676]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:16 compute-0 sudo[259701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:16 compute-0 sudo[259701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:16 compute-0 sudo[259701]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:17 compute-0 sudo[259726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 08:59:17 compute-0 sudo[259726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:17 compute-0 ceph-mon[74357]: osdmap e151: 3 total, 3 up, 3 in
Jan 27 08:59:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:17.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 08:59:17 compute-0 sudo[259726]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 08:59:17 compute-0 sudo[259781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:17 compute-0 sudo[259781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:17 compute-0 sudo[259781]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:17 compute-0 sudo[259806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:59:17 compute-0 sudo[259806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:17 compute-0 sudo[259806]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:17 compute-0 sudo[259831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:17 compute-0 sudo[259831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:17 compute-0 sudo[259831]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:17 compute-0 sudo[259857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 27 08:59:17 compute-0 sudo[259857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:17 compute-0 sudo[259857]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:59:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:59:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 41 MiB data, 202 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 2.5 KiB/s wr, 42 op/s
Jan 27 08:59:18 compute-0 ceph-mon[74357]: pgmap v1073: 305 pgs: 305 active+clean; 41 MiB data, 202 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 27 08:59:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:18 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:59:18 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:59:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 08:59:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:59:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 08:59:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:18 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 12d28b07-fb5a-4345-bb75-23a108071331 does not exist
Jan 27 08:59:18 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 8eba8a0f-4271-477d-858e-10cdc2fb6c42 does not exist
Jan 27 08:59:18 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5f78cb8b-fb32-4aad-a622-8a10295be23d does not exist
Jan 27 08:59:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 08:59:18 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:59:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 08:59:18 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:59:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 08:59:18 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:59:18 compute-0 sudo[259899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:18 compute-0 sudo[259899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:18 compute-0 sudo[259899]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:18 compute-0 sudo[259924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:59:18 compute-0 sudo[259924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:18 compute-0 sudo[259924]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:59:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:18.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:59:18 compute-0 sudo[259949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:18 compute-0 sudo[259949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:18 compute-0 sudo[259949]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:18 compute-0 sudo[259974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 08:59:18 compute-0 sudo[259974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
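The COMMAND= field two lines up is easier to read split into its two halves: cephadm's own wrapper flags, then (after the bare --) the ceph-volume subcommand it runs inside a container. Restated as Python argument lists, with every value copied verbatim from the log:

    # Outer wrapper: the copied-in cephadm script, pinned image, and timeout.
    cephadm_args = [
        "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
        "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895",
        "ceph-volume",
        "--fsid", "281e9bde-2795-59f4-98ac-90cf5b49a2de",
        "--config-json", "-",   # cluster config and keyring are fed on stdin
    ]

    # Inner command: create an OSD on the prepared LV, without systemd units
    # (cephadm generates and manages the units itself, hence --no-systemd).
    ceph_volume_args = [
        "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
        "--yes", "--no-systemd",
    ]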
Jan 27 08:59:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:19.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:19 compute-0 ceph-mon[74357]: pgmap v1074: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 41 MiB data, 202 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 2.5 KiB/s wr, 42 op/s
Jan 27 08:59:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:59:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 08:59:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 08:59:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 08:59:19 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 08:59:19 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:59:19 compute-0 podman[260040]: 2026-01-27 08:59:19.342509334 +0000 UTC m=+0.062980856 container create 9f32299209255fa04f4089cf0f7ddb3dc9ce52245d0c027e40c03b36bf5e785f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_napier, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:59:19 compute-0 podman[260040]: 2026-01-27 08:59:19.309581628 +0000 UTC m=+0.030053180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:59:19 compute-0 systemd[1]: Started libpod-conmon-9f32299209255fa04f4089cf0f7ddb3dc9ce52245d0c027e40c03b36bf5e785f.scope.
Jan 27 08:59:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:59:19 compute-0 podman[260040]: 2026-01-27 08:59:19.511634538 +0000 UTC m=+0.232106070 container init 9f32299209255fa04f4089cf0f7ddb3dc9ce52245d0c027e40c03b36bf5e785f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_napier, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:59:19 compute-0 podman[260040]: 2026-01-27 08:59:19.519727899 +0000 UTC m=+0.240199421 container start 9f32299209255fa04f4089cf0f7ddb3dc9ce52245d0c027e40c03b36bf5e785f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_napier, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 08:59:19 compute-0 podman[260040]: 2026-01-27 08:59:19.523531872 +0000 UTC m=+0.244003414 container attach 9f32299209255fa04f4089cf0f7ddb3dc9ce52245d0c027e40c03b36bf5e785f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:59:19 compute-0 reverent_napier[260056]: 167 167
Jan 27 08:59:19 compute-0 systemd[1]: libpod-9f32299209255fa04f4089cf0f7ddb3dc9ce52245d0c027e40c03b36bf5e785f.scope: Deactivated successfully.
Jan 27 08:59:19 compute-0 podman[260040]: 2026-01-27 08:59:19.525137806 +0000 UTC m=+0.245609318 container died 9f32299209255fa04f4089cf0f7ddb3dc9ce52245d0c027e40c03b36bf5e785f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:59:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1055e9f919f9047bdc433fd71c86c6e4a4be1d56eeaf5ba78271926ad8d3225a-merged.mount: Deactivated successfully.
Jan 27 08:59:19 compute-0 podman[260040]: 2026-01-27 08:59:19.574366435 +0000 UTC m=+0.294837947 container remove 9f32299209255fa04f4089cf0f7ddb3dc9ce52245d0c027e40c03b36bf5e785f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_napier, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:59:19 compute-0 systemd[1]: libpod-conmon-9f32299209255fa04f4089cf0f7ddb3dc9ce52245d0c027e40c03b36bf5e785f.scope: Deactivated successfully.
Jan 27 08:59:19 compute-0 podman[260080]: 2026-01-27 08:59:19.73025181 +0000 UTC m=+0.042143259 container create e8d7ddfc62c257f615f7fd16b8adbcdc027c0cc459eb9fa31705d1b32bf2b496 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 08:59:19 compute-0 systemd[1]: Started libpod-conmon-e8d7ddfc62c257f615f7fd16b8adbcdc027c0cc459eb9fa31705d1b32bf2b496.scope.
Jan 27 08:59:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3c497ce568e0a56a98cf21681b43c44766e4c49f18a3f86b0f2e22b20c4acf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:19 compute-0 podman[260080]: 2026-01-27 08:59:19.711903421 +0000 UTC m=+0.023794900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3c497ce568e0a56a98cf21681b43c44766e4c49f18a3f86b0f2e22b20c4acf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3c497ce568e0a56a98cf21681b43c44766e4c49f18a3f86b0f2e22b20c4acf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3c497ce568e0a56a98cf21681b43c44766e4c49f18a3f86b0f2e22b20c4acf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3c497ce568e0a56a98cf21681b43c44766e4c49f18a3f86b0f2e22b20c4acf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
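The "supports timestamps until 2038" kernel lines are informational: the xfs filesystems bind-mounted into the container lack the bigtime feature, so inode timestamps top out at 0x7fffffff (19 Jan 2038). A quick check, assuming an xfsprogs recent enough to report the flag in xfs_info output:

    import subprocess

    # Look for "bigtime=1" in xfs_info; its absence matches the 2038 notices above.
    info = subprocess.run(["xfs_info", "/"], capture_output=True, text=True, check=True)
    print("bigtime enabled" if "bigtime=1" in info.stdout
          else "timestamps limited to 2038, as logged")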
Jan 27 08:59:19 compute-0 podman[260080]: 2026-01-27 08:59:19.827475687 +0000 UTC m=+0.139367216 container init e8d7ddfc62c257f615f7fd16b8adbcdc027c0cc459eb9fa31705d1b32bf2b496 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 08:59:19 compute-0 podman[260080]: 2026-01-27 08:59:19.833722787 +0000 UTC m=+0.145614236 container start e8d7ddfc62c257f615f7fd16b8adbcdc027c0cc459eb9fa31705d1b32bf2b496 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 27 08:59:19 compute-0 podman[260080]: 2026-01-27 08:59:19.837747426 +0000 UTC m=+0.149638955 container attach e8d7ddfc62c257f615f7fd16b8adbcdc027c0cc459eb9fa31705d1b32bf2b496 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 27 08:59:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 41 MiB data, 202 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 3.5 KiB/s wr, 52 op/s
Jan 27 08:59:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 27 08:59:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 27 08:59:20 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 27 08:59:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 27 08:59:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 27 08:59:20 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 27 08:59:20 compute-0 romantic_murdock[260096]: --> passed data devices: 0 physical, 1 LVM
Jan 27 08:59:20 compute-0 romantic_murdock[260096]: --> relative data size: 1.0
Jan 27 08:59:20 compute-0 romantic_murdock[260096]: --> All data devices are unavailable
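"All data devices are unavailable" here is the expected outcome rather than a failure: ceph-volume refuses to re-consume /dev/ceph_vg0/ceph_lv0 because the LV already carries OSD tags, which the lvm list output further down in this log confirms. The tags can be inspected directly; a small sketch shelling out to the lvs tool:

    import subprocess

    # An LV tagged with ceph.osd_id/ceph.osd_fsid is skipped by "lvm batch";
    # the output should match the "lvm list" JSON later in this log.
    out = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_tags", "/dev/ceph_vg0/ceph_lv0"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())   # expect ceph.osd_id=0 and the other ceph.* tags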
Jan 27 08:59:20 compute-0 systemd[1]: libpod-e8d7ddfc62c257f615f7fd16b8adbcdc027c0cc459eb9fa31705d1b32bf2b496.scope: Deactivated successfully.
Jan 27 08:59:20 compute-0 podman[260080]: 2026-01-27 08:59:20.691737124 +0000 UTC m=+1.003628593 container died e8d7ddfc62c257f615f7fd16b8adbcdc027c0cc459eb9fa31705d1b32bf2b496 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 27 08:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d3c497ce568e0a56a98cf21681b43c44766e4c49f18a3f86b0f2e22b20c4acf-merged.mount: Deactivated successfully.
Jan 27 08:59:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:59:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:20.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
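The anonymous "HEAD / HTTP/1.0" requests recur on a steady ~2 s cadence, alternating between 192.168.122.100 and 192.168.122.102, and always return 200 with sub-millisecond latency; the pattern matches load-balancer health probes against the RGW endpoint. The same probe in Python, assuming the beast frontend listens on port 8080 (the port is not shown in this excerpt):

    import http.client

    # Anonymous HEAD / probe, same shape as the beast access-log lines above.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # the log shows 200 for every probe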
Jan 27 08:59:20 compute-0 podman[260080]: 2026-01-27 08:59:20.795266533 +0000 UTC m=+1.107158022 container remove e8d7ddfc62c257f615f7fd16b8adbcdc027c0cc459eb9fa31705d1b32bf2b496 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:59:20 compute-0 systemd[1]: libpod-conmon-e8d7ddfc62c257f615f7fd16b8adbcdc027c0cc459eb9fa31705d1b32bf2b496.scope: Deactivated successfully.
Jan 27 08:59:20 compute-0 sudo[259974]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:20 compute-0 sudo[260126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:20 compute-0 sudo[260126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:20 compute-0 sudo[260126]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:20 compute-0 sudo[260151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:59:20 compute-0 sudo[260151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:20 compute-0 sudo[260151]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:21 compute-0 sudo[260176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:21 compute-0 sudo[260176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:21 compute-0 sudo[260176]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:21 compute-0 sudo[260201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 08:59:21 compute-0 sudo[260201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:21.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:21 compute-0 ceph-mon[74357]: pgmap v1075: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 41 MiB data, 202 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 3.5 KiB/s wr, 52 op/s
Jan 27 08:59:21 compute-0 ceph-mon[74357]: osdmap e152: 3 total, 3 up, 3 in
Jan 27 08:59:21 compute-0 ceph-mon[74357]: osdmap e153: 3 total, 3 up, 3 in
Jan 27 08:59:21 compute-0 podman[260264]: 2026-01-27 08:59:21.403308286 +0000 UTC m=+0.072571757 container create 465c9477642f8a12ea6c5e20ad06c929090369dde0e3409a2cebfe372604a16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hopper, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Jan 27 08:59:21 compute-0 podman[260264]: 2026-01-27 08:59:21.352071751 +0000 UTC m=+0.021335232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:59:21 compute-0 systemd[1]: Started libpod-conmon-465c9477642f8a12ea6c5e20ad06c929090369dde0e3409a2cebfe372604a16a.scope.
Jan 27 08:59:21 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:59:21 compute-0 podman[260264]: 2026-01-27 08:59:21.596336891 +0000 UTC m=+0.265600372 container init 465c9477642f8a12ea6c5e20ad06c929090369dde0e3409a2cebfe372604a16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hopper, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:59:21 compute-0 podman[260264]: 2026-01-27 08:59:21.608824432 +0000 UTC m=+0.278087893 container start 465c9477642f8a12ea6c5e20ad06c929090369dde0e3409a2cebfe372604a16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hopper, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:59:21 compute-0 hardcore_hopper[260280]: 167 167
Jan 27 08:59:21 compute-0 systemd[1]: libpod-465c9477642f8a12ea6c5e20ad06c929090369dde0e3409a2cebfe372604a16a.scope: Deactivated successfully.
Jan 27 08:59:21 compute-0 podman[260264]: 2026-01-27 08:59:21.642024606 +0000 UTC m=+0.311288077 container attach 465c9477642f8a12ea6c5e20ad06c929090369dde0e3409a2cebfe372604a16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hopper, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 08:59:21 compute-0 podman[260264]: 2026-01-27 08:59:21.643076884 +0000 UTC m=+0.312340355 container died 465c9477642f8a12ea6c5e20ad06c929090369dde0e3409a2cebfe372604a16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:59:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9848a408c220ce5a2e0893174378a9125d141ab0f4c527b7f787e505e6773ccd-merged.mount: Deactivated successfully.
Jan 27 08:59:22 compute-0 podman[260264]: 2026-01-27 08:59:22.070972022 +0000 UTC m=+0.740235483 container remove 465c9477642f8a12ea6c5e20ad06c929090369dde0e3409a2cebfe372604a16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:59:22 compute-0 systemd[1]: libpod-conmon-465c9477642f8a12ea6c5e20ad06c929090369dde0e3409a2cebfe372604a16a.scope: Deactivated successfully.
Jan 27 08:59:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 4.5 KiB/s wr, 69 op/s
Jan 27 08:59:22 compute-0 podman[260307]: 2026-01-27 08:59:22.263054921 +0000 UTC m=+0.061823634 container create 3da587ae117c1dd81a61ac1181600dc6d30694ab9663e52f15f050b75444c5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jackson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:59:22 compute-0 systemd[1]: Started libpod-conmon-3da587ae117c1dd81a61ac1181600dc6d30694ab9663e52f15f050b75444c5b4.scope.
Jan 27 08:59:22 compute-0 podman[260307]: 2026-01-27 08:59:22.223148825 +0000 UTC m=+0.021917558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:59:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc6008a2d961540d6873b2873aa42e2cfc4b1d1f0410466c6a62752c12c4b15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc6008a2d961540d6873b2873aa42e2cfc4b1d1f0410466c6a62752c12c4b15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc6008a2d961540d6873b2873aa42e2cfc4b1d1f0410466c6a62752c12c4b15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc6008a2d961540d6873b2873aa42e2cfc4b1d1f0410466c6a62752c12c4b15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:22 compute-0 podman[260307]: 2026-01-27 08:59:22.388612239 +0000 UTC m=+0.187380972 container init 3da587ae117c1dd81a61ac1181600dc6d30694ab9663e52f15f050b75444c5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:59:22 compute-0 podman[260307]: 2026-01-27 08:59:22.396099543 +0000 UTC m=+0.194868256 container start 3da587ae117c1dd81a61ac1181600dc6d30694ab9663e52f15f050b75444c5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 08:59:22 compute-0 podman[260307]: 2026-01-27 08:59:22.409081226 +0000 UTC m=+0.207849939 container attach 3da587ae117c1dd81a61ac1181600dc6d30694ab9663e52f15f050b75444c5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jackson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:59:22 compute-0 podman[260325]: 2026-01-27 08:59:22.466705905 +0000 UTC m=+0.143367283 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:59:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:22.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:23 compute-0 focused_jackson[260324]: {
Jan 27 08:59:23 compute-0 focused_jackson[260324]:     "0": [
Jan 27 08:59:23 compute-0 focused_jackson[260324]:         {
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             "devices": [
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "/dev/loop3"
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             ],
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             "lv_name": "ceph_lv0",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             "lv_size": "7511998464",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             "name": "ceph_lv0",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             "tags": {
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "ceph.cluster_name": "ceph",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "ceph.crush_device_class": "",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "ceph.encrypted": "0",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "ceph.osd_id": "0",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "ceph.type": "block",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:                 "ceph.vdo": "0"
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             },
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             "type": "block",
Jan 27 08:59:23 compute-0 focused_jackson[260324]:             "vg_name": "ceph_vg0"
Jan 27 08:59:23 compute-0 focused_jackson[260324]:         }
Jan 27 08:59:23 compute-0 focused_jackson[260324]:     ]
Jan 27 08:59:23 compute-0 focused_jackson[260324]: }
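The focused_jackson output above is the JSON from the "ceph-volume ... lvm list --format json" run requested at 08:59:21: a map of OSD id to LV records. A short sketch pulling out the fields cephadm reconciles against (the literal below is a trimmed copy of that JSON):

    import json

    raw_output = """{
      "0": [{
        "devices": ["/dev/loop3"],
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "tags": {"ceph.osd_id": "0",
                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30"}
      }]
    }"""

    # osd id -> backing LV path, OSD identity, and physical devices.
    for osd_id, lvs in json.loads(raw_output).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"], lv["devices"])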
Jan 27 08:59:23 compute-0 systemd[1]: libpod-3da587ae117c1dd81a61ac1181600dc6d30694ab9663e52f15f050b75444c5b4.scope: Deactivated successfully.
Jan 27 08:59:23 compute-0 podman[260307]: 2026-01-27 08:59:23.185537515 +0000 UTC m=+0.984306258 container died 3da587ae117c1dd81a61ac1181600dc6d30694ab9663e52f15f050b75444c5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jackson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 08:59:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:23.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:23 compute-0 ceph-mon[74357]: pgmap v1078: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 4.5 KiB/s wr, 69 op/s
Jan 27 08:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dc6008a2d961540d6873b2873aa42e2cfc4b1d1f0410466c6a62752c12c4b15-merged.mount: Deactivated successfully.
Jan 27 08:59:23 compute-0 podman[260307]: 2026-01-27 08:59:23.87414119 +0000 UTC m=+1.672909913 container remove 3da587ae117c1dd81a61ac1181600dc6d30694ab9663e52f15f050b75444c5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:59:23 compute-0 sudo[260201]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:23 compute-0 systemd[1]: libpod-conmon-3da587ae117c1dd81a61ac1181600dc6d30694ab9663e52f15f050b75444c5b4.scope: Deactivated successfully.
Jan 27 08:59:23 compute-0 sudo[260365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:24 compute-0 sudo[260365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:24 compute-0 sudo[260365]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:24 compute-0 sudo[260390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 08:59:24 compute-0 sudo[260390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:24 compute-0 sudo[260390]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:24 compute-0 sudo[260415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:24 compute-0 sudo[260415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:24 compute-0 sudo[260415]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 3.5 KiB/s wr, 53 op/s
Jan 27 08:59:24 compute-0 sudo[260440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 08:59:24 compute-0 sudo[260440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001903324492834543 of space, bias 1.0, pg target 0.5709973478503629 quantized to 32 (current 32)
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 08:59:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
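Each pg_autoscaler line reports a pool's share of raw capacity, its bias, and the resulting pg target, which is then quantized to a power of two (left at the current pg_num when the target is far smaller). The numbers are internally consistent with pg_target = capacity_ratio x bias x 300, where 300 is evidently 3 OSDs times a per-OSD PG target of 100; the 100 is inferred from the arithmetic (it matches the Ceph default for mon_target_pg_per_osd), not stated in the log. Checking two lines:

    # Reproduce the pg_autoscaler arithmetic from the log lines above.
    # ASSUMPTION: per-OSD PG target of 100 across 3 OSDs, inferred because
    # pg_target / (capacity_ratio * bias) == 300 on every line.
    TARGET_PGS = 100 * 3

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * TARGET_PGS

    print(pg_target(0.001903324492834543, 1.0))
    # -> ~0.571, matching the 'images' line (quantized to 32, current 32)
    print(pg_target(1.4540294062907128e-06, 4.0))
    # -> ~0.0017448, matching 'cephfs.cephfs.meta' (quantized to 16, current 16)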
Jan 27 08:59:24 compute-0 podman[260505]: 2026-01-27 08:59:24.512063357 +0000 UTC m=+0.048498461 container create 93930f13fa49f462d6d8375822e9eb34722177579e2eb0dce7ccef33b1029489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 08:59:24 compute-0 systemd[1]: Started libpod-conmon-93930f13fa49f462d6d8375822e9eb34722177579e2eb0dce7ccef33b1029489.scope.
Jan 27 08:59:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:59:24 compute-0 podman[260505]: 2026-01-27 08:59:24.492005601 +0000 UTC m=+0.028440745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:59:24 compute-0 podman[260505]: 2026-01-27 08:59:24.713020628 +0000 UTC m=+0.249455752 container init 93930f13fa49f462d6d8375822e9eb34722177579e2eb0dce7ccef33b1029489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:59:24 compute-0 podman[260505]: 2026-01-27 08:59:24.720209423 +0000 UTC m=+0.256644517 container start 93930f13fa49f462d6d8375822e9eb34722177579e2eb0dce7ccef33b1029489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swartz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 08:59:24 compute-0 focused_swartz[260521]: 167 167
Jan 27 08:59:24 compute-0 systemd[1]: libpod-93930f13fa49f462d6d8375822e9eb34722177579e2eb0dce7ccef33b1029489.scope: Deactivated successfully.
Jan 27 08:59:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:24.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:24 compute-0 podman[260505]: 2026-01-27 08:59:24.867463092 +0000 UTC m=+0.403903806 container attach 93930f13fa49f462d6d8375822e9eb34722177579e2eb0dce7ccef33b1029489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:59:24 compute-0 podman[260505]: 2026-01-27 08:59:24.867932045 +0000 UTC m=+0.404367159 container died 93930f13fa49f462d6d8375822e9eb34722177579e2eb0dce7ccef33b1029489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swartz, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 27 08:59:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c28dfbf523219cbd0d212ba3acc904be7bdddb36817bbf60a08b4686039f4fbc-merged.mount: Deactivated successfully.
Jan 27 08:59:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:25.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:25 compute-0 podman[260505]: 2026-01-27 08:59:25.487725748 +0000 UTC m=+1.024160852 container remove 93930f13fa49f462d6d8375822e9eb34722177579e2eb0dce7ccef33b1029489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:59:25 compute-0 systemd[1]: libpod-conmon-93930f13fa49f462d6d8375822e9eb34722177579e2eb0dce7ccef33b1029489.scope: Deactivated successfully.
Jan 27 08:59:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 27 08:59:25 compute-0 podman[260546]: 2026-01-27 08:59:25.624998495 +0000 UTC m=+0.024887859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 08:59:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 27 08:59:25 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 27 08:59:25 compute-0 podman[260546]: 2026-01-27 08:59:25.910574059 +0000 UTC m=+0.310463403 container create 195b4a66b0b41dd06e1db2e4b5a5c5281a4e65b4f979cb7690dbeef74b3fe830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:59:25 compute-0 sudo[260561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:25 compute-0 sudo[260561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:25 compute-0 sudo[260561]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:26 compute-0 ceph-mon[74357]: pgmap v1079: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 3.5 KiB/s wr, 53 op/s
Jan 27 08:59:26 compute-0 sudo[260586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:26 compute-0 sudo[260586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:26 compute-0 sudo[260586]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:26 compute-0 systemd[1]: Started libpod-conmon-195b4a66b0b41dd06e1db2e4b5a5c5281a4e65b4f979cb7690dbeef74b3fe830.scope.
Jan 27 08:59:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 08:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9a0cf0180652cbd3c8ca1aaed39b9320431be08c47c36735c33be6dd923bdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9a0cf0180652cbd3c8ca1aaed39b9320431be08c47c36735c33be6dd923bdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9a0cf0180652cbd3c8ca1aaed39b9320431be08c47c36735c33be6dd923bdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9a0cf0180652cbd3c8ca1aaed39b9320431be08c47c36735c33be6dd923bdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Jan 27 08:59:26 compute-0 podman[260546]: 2026-01-27 08:59:26.202402074 +0000 UTC m=+0.602291448 container init 195b4a66b0b41dd06e1db2e4b5a5c5281a4e65b4f979cb7690dbeef74b3fe830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 08:59:26 compute-0 podman[260546]: 2026-01-27 08:59:26.209315592 +0000 UTC m=+0.609204936 container start 195b4a66b0b41dd06e1db2e4b5a5c5281a4e65b4f979cb7690dbeef74b3fe830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:59:26 compute-0 podman[260546]: 2026-01-27 08:59:26.222642725 +0000 UTC m=+0.622532089 container attach 195b4a66b0b41dd06e1db2e4b5a5c5281a4e65b4f979cb7690dbeef74b3fe830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:59:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:26.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:27 compute-0 thirsty_vaughan[260614]: {
Jan 27 08:59:27 compute-0 thirsty_vaughan[260614]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 08:59:27 compute-0 thirsty_vaughan[260614]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 08:59:27 compute-0 thirsty_vaughan[260614]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 08:59:27 compute-0 thirsty_vaughan[260614]:         "osd_id": 0,
Jan 27 08:59:27 compute-0 thirsty_vaughan[260614]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 08:59:27 compute-0 thirsty_vaughan[260614]:         "type": "bluestore"
Jan 27 08:59:27 compute-0 thirsty_vaughan[260614]:     }
Jan 27 08:59:27 compute-0 thirsty_vaughan[260614]: }
Jan 27 08:59:27 compute-0 ceph-mon[74357]: osdmap e154: 3 total, 3 up, 3 in
Jan 27 08:59:27 compute-0 systemd[1]: libpod-195b4a66b0b41dd06e1db2e4b5a5c5281a4e65b4f979cb7690dbeef74b3fe830.scope: Deactivated successfully.
Jan 27 08:59:27 compute-0 podman[260546]: 2026-01-27 08:59:27.109375165 +0000 UTC m=+1.509264529 container died 195b4a66b0b41dd06e1db2e4b5a5c5281a4e65b4f979cb7690dbeef74b3fe830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 08:59:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:27.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b9a0cf0180652cbd3c8ca1aaed39b9320431be08c47c36735c33be6dd923bdc-merged.mount: Deactivated successfully.
Jan 27 08:59:27 compute-0 podman[260546]: 2026-01-27 08:59:27.444694624 +0000 UTC m=+1.844583968 container remove 195b4a66b0b41dd06e1db2e4b5a5c5281a4e65b4f979cb7690dbeef74b3fe830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:59:27 compute-0 sudo[260440]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:27 compute-0 systemd[1]: libpod-conmon-195b4a66b0b41dd06e1db2e4b5a5c5281a4e65b4f979cb7690dbeef74b3fe830.scope: Deactivated successfully.
Jan 27 08:59:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 08:59:27 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 08:59:27 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:27 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 7bb66c5b-0a7e-4169-92ae-f1fed0673a23 does not exist
Jan 27 08:59:27 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 06c42914-e991-4e62-8016-e5d439140d80 does not exist
Jan 27 08:59:27 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 1f0d0ee3-03a9-4ff7-ad19-62623fd8163e does not exist
Jan 27 08:59:27 compute-0 sudo[260648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:27 compute-0 sudo[260648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:27 compute-0 sudo[260648]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:27 compute-0 sudo[260673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 08:59:27 compute-0 sudo[260673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:27 compute-0 sudo[260673]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:28 compute-0 ceph-mon[74357]: pgmap v1081: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Jan 27 08:59:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:28 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 08:59:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 KiB/s wr, 41 op/s
Jan 27 08:59:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:28.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:29.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:29 compute-0 ceph-mon[74357]: pgmap v1082: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 KiB/s wr, 41 op/s
Jan 27 08:59:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Jan 27 08:59:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:59:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 8048 writes, 29K keys, 8048 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8048 writes, 1890 syncs, 4.26 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1715 writes, 3702 keys, 1715 commit groups, 1.0 writes per commit group, ingest: 1.94 MB, 0.00 MB/s
                                           Interval WAL: 1715 writes, 725 syncs, 2.37 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 08:59:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 27 08:59:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 27 08:59:30 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 27 08:59:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:30.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:59:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:31.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:59:31 compute-0 ceph-mon[74357]: pgmap v1083: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Jan 27 08:59:31 compute-0 ceph-mon[74357]: osdmap e155: 3 total, 3 up, 3 in
Jan 27 08:59:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 511 B/s wr, 14 op/s
Jan 27 08:59:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:32.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:33.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:59:34 compute-0 ceph-mon[74357]: pgmap v1085: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 511 B/s wr, 14 op/s
Jan 27 08:59:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:34.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:35.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:35 compute-0 ceph-mon[74357]: pgmap v1086: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:59:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:59:36 compute-0 ceph-mgr[74650]: [devicehealth INFO root] Check health
Jan 27 08:59:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:36.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:37.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:37 compute-0 ceph-mon[74357]: pgmap v1087: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:59:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:59:38 compute-0 podman[260704]: 2026-01-27 08:59:38.280681625 +0000 UTC m=+0.086418753 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:59:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:38.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:39.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:39 compute-0 ceph-mon[74357]: pgmap v1088: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:59:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:59:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:40.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:41.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:41 compute-0 ceph-mon[74357]: pgmap v1089: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:59:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:59:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:42.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:43.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:43 compute-0 ceph-mon[74357]: pgmap v1090: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:59:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:59:44 compute-0 nova_compute[247671]: 2026-01-27 08:59:44.675 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:44.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:59:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:59:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:59:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:59:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:59:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:59:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:45.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:46 compute-0 ceph-mon[74357]: pgmap v1091: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 08:59:46 compute-0 sudo[260735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:46 compute-0 sudo[260735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:46 compute-0 sudo[260735]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 27 08:59:46 compute-0 sudo[260760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 08:59:46 compute-0 sudo[260760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 08:59:46 compute-0 sudo[260760]: pam_unix(sudo:session): session closed for user root
Jan 27 08:59:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:59:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:46.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:59:47 compute-0 ceph-mon[74357]: pgmap v1092: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 27 08:59:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:47.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 27 08:59:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:59:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:48.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:59:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:49.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:49 compute-0 ceph-mon[74357]: pgmap v1093: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 27 08:59:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 08:59:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:50.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:51.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:51 compute-0 ceph-mon[74357]: pgmap v1094: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 08:59:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 08:59:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:52.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:59:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:53.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:59:53 compute-0 podman[260788]: 2026-01-27 08:59:53.257873483 +0000 UTC m=+0.061685006 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 08:59:53 compute-0 ceph-mon[74357]: pgmap v1095: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 08:59:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 08:59:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:59:54.241 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:59:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:59:54.241 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:59:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 08:59:54.241 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:59:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:54.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:55.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:55 compute-0 ceph-mon[74357]: pgmap v1096: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 08:59:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 08:59:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:56.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:57.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:57 compute-0 ceph-mon[74357]: pgmap v1097: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 08:59:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 27 08:59:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 08:59:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:08:59:58.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 08:59:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 08:59:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 08:59:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:08:59:59.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 08:59:59 compute-0 ceph-mon[74357]: pgmap v1098: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 27 08:59:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3914481076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 08:59:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3914481076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:00:00 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 27 09:00:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 27 09:00:00 compute-0 nova_compute[247671]: 2026-01-27 09:00:00.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:00:00 compute-0 nova_compute[247671]: 2026-01-27 09:00:00.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:00:00 compute-0 ceph-mon[74357]: overall HEALTH_OK
Jan 27 09:00:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:00.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:01.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:01 compute-0 nova_compute[247671]: 2026-01-27 09:00:01.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:00:01 compute-0 ceph-mon[74357]: pgmap v1099: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 27 09:00:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:02 compute-0 nova_compute[247671]: 2026-01-27 09:00:02.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:00:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:02.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:03.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:03 compute-0 ceph-mon[74357]: pgmap v1100: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:04 compute-0 nova_compute[247671]: 2026-01-27 09:00:04.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:00:04 compute-0 nova_compute[247671]: 2026-01-27 09:00:04.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:00:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:04.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:05.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:05 compute-0 nova_compute[247671]: 2026-01-27 09:00:05.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:00:05 compute-0 nova_compute[247671]: 2026-01-27 09:00:05.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:00:05 compute-0 nova_compute[247671]: 2026-01-27 09:00:05.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:00:05 compute-0 nova_compute[247671]: 2026-01-27 09:00:05.523 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:00:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:05 compute-0 ceph-mon[74357]: pgmap v1101: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:06 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:00:06.073 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:00:06 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:00:06.074 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:00:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:06 compute-0 sudo[260814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:06 compute-0 sudo[260814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:06 compute-0 sudo[260814]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:06 compute-0 sudo[260839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:06 compute-0 sudo[260839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:06 compute-0 sudo[260839]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:06 compute-0 nova_compute[247671]: 2026-01-27 09:00:06.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:00:06 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1330935896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:00:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:06.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:00:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:07.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:00:07 compute-0 ceph-mon[74357]: pgmap v1102: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:07 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2743196980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:00:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:08 compute-0 nova_compute[247671]: 2026-01-27 09:00:08.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:00:08 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3573407091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:00:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:08.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:09 compute-0 podman[260865]: 2026-01-27 09:00:09.266086847 +0000 UTC m=+0.083944513 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 09:00:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:09.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:09 compute-0 nova_compute[247671]: 2026-01-27 09:00:09.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:00:09 compute-0 nova_compute[247671]: 2026-01-27 09:00:09.453 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:09 compute-0 nova_compute[247671]: 2026-01-27 09:00:09.453 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:09 compute-0 nova_compute[247671]: 2026-01-27 09:00:09.454 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:09 compute-0 nova_compute[247671]: 2026-01-27 09:00:09.454 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:00:09 compute-0 nova_compute[247671]: 2026-01-27 09:00:09.454 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:00:09 compute-0 ceph-mon[74357]: pgmap v1103: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:00:09 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/43982394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:00:09 compute-0 nova_compute[247671]: 2026-01-27 09:00:09.899 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:00:10 compute-0 nova_compute[247671]: 2026-01-27 09:00:10.046 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:00:10 compute-0 nova_compute[247671]: 2026-01-27 09:00:10.047 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5215MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:00:10 compute-0 nova_compute[247671]: 2026-01-27 09:00:10.047 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:10 compute-0 nova_compute[247671]: 2026-01-27 09:00:10.047 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:10 compute-0 nova_compute[247671]: 2026-01-27 09:00:10.445 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:00:10 compute-0 nova_compute[247671]: 2026-01-27 09:00:10.446 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:00:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:10 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2330469162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:00:10 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/43982394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:00:10 compute-0 nova_compute[247671]: 2026-01-27 09:00:10.749 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing inventories for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 09:00:10 compute-0 nova_compute[247671]: 2026-01-27 09:00:10.793 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Updating ProviderTree inventory for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 09:00:10 compute-0 nova_compute[247671]: 2026-01-27 09:00:10.794 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Updating inventory in ProviderTree for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 09:00:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:10.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:10 compute-0 nova_compute[247671]: 2026-01-27 09:00:10.878 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing aggregate associations for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 09:00:10 compute-0 nova_compute[247671]: 2026-01-27 09:00:10.915 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing trait associations for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 09:00:10 compute-0 nova_compute[247671]: 2026-01-27 09:00:10.946 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:00:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:11.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:00:11 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/106566842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:00:11 compute-0 nova_compute[247671]: 2026-01-27 09:00:11.352 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:00:11 compute-0 nova_compute[247671]: 2026-01-27 09:00:11.357 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:00:11 compute-0 nova_compute[247671]: 2026-01-27 09:00:11.386 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:00:11 compute-0 nova_compute[247671]: 2026-01-27 09:00:11.387 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:00:11 compute-0 nova_compute[247671]: 2026-01-27 09:00:11.388 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:11 compute-0 ceph-mon[74357]: pgmap v1104: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:11 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/106566842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:00:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:12 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/746550693' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:00:12 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/746550693' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:00:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:12.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:13 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:00:13.076 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:00:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:13.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:13 compute-0 ceph-mon[74357]: pgmap v1105: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:14.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:00:15
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.data']
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:00:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:00:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:15.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:16 compute-0 ceph-mon[74357]: pgmap v1106: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:00:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:16.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:17.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:18 compute-0 ceph-mon[74357]: pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:00:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:00:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:18.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:19 compute-0 ceph-mon[74357]: pgmap v1108: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:00:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:19.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:00:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:20.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:21.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:21 compute-0 ceph-mon[74357]: pgmap v1109: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:00:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:00:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:22.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:23.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:23 compute-0 ceph-mon[74357]: pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:00:24 compute-0 podman[260943]: 2026-01-27 09:00:24.238707191 +0000 UTC m=+0.048393572 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:00:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:00:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:24.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:25.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:25 compute-0 ceph-mon[74357]: pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:00:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:00:26 compute-0 sudo[260963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:26 compute-0 sudo[260963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:26 compute-0 sudo[260963]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:26 compute-0 sudo[260988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:26 compute-0 sudo[260988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:26 compute-0 sudo[260988]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:26.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:27.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:27 compute-0 ceph-mon[74357]: pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:00:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:28 compute-0 sudo[261014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:28 compute-0 sudo[261014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:28 compute-0 sudo[261014]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:28 compute-0 sudo[261039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:00:28 compute-0 sudo[261039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:28 compute-0 sudo[261039]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:28 compute-0 sudo[261064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:28 compute-0 sudo[261064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:28 compute-0 sudo[261064]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:28 compute-0 sudo[261089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:00:28 compute-0 sudo[261089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 09:00:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 09:00:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 09:00:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:28.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 09:00:28 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:28 compute-0 sudo[261089]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:28 compute-0 sudo[261145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:28 compute-0 sudo[261145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:28 compute-0 sudo[261145]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:29 compute-0 sudo[261170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:00:29 compute-0 sudo[261170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:29 compute-0 sudo[261170]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:29 compute-0 sudo[261195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:29 compute-0 sudo[261195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:29 compute-0 sudo[261195]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:29 compute-0 sudo[261220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- inventory --format=json-pretty --filter-for-batch
Jan 27 09:00:29 compute-0 sudo[261220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:29.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:29 compute-0 podman[261285]: 2026-01-27 09:00:29.418911885 +0000 UTC m=+0.022598728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:00:29 compute-0 podman[261285]: 2026-01-27 09:00:29.521113736 +0000 UTC m=+0.124800569 container create 494e42f9636a728c1f96a85a45cbd911fbda376e9201e4dd733f1c8aaed49197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 27 09:00:29 compute-0 systemd[1]: Started libpod-conmon-494e42f9636a728c1f96a85a45cbd911fbda376e9201e4dd733f1c8aaed49197.scope.
Jan 27 09:00:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:00:29 compute-0 podman[261285]: 2026-01-27 09:00:29.598933062 +0000 UTC m=+0.202619905 container init 494e42f9636a728c1f96a85a45cbd911fbda376e9201e4dd733f1c8aaed49197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:00:29 compute-0 podman[261285]: 2026-01-27 09:00:29.605534922 +0000 UTC m=+0.209221745 container start 494e42f9636a728c1f96a85a45cbd911fbda376e9201e4dd733f1c8aaed49197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 27 09:00:29 compute-0 podman[261285]: 2026-01-27 09:00:29.609418909 +0000 UTC m=+0.213105742 container attach 494e42f9636a728c1f96a85a45cbd911fbda376e9201e4dd733f1c8aaed49197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 27 09:00:29 compute-0 pensive_golick[261301]: 167 167
Jan 27 09:00:29 compute-0 systemd[1]: libpod-494e42f9636a728c1f96a85a45cbd911fbda376e9201e4dd733f1c8aaed49197.scope: Deactivated successfully.
Jan 27 09:00:29 compute-0 podman[261285]: 2026-01-27 09:00:29.6113303 +0000 UTC m=+0.215017113 container died 494e42f9636a728c1f96a85a45cbd911fbda376e9201e4dd733f1c8aaed49197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f2ea99d0e47272e5ac148022d15a137be9ccc119d50bf636fe1c55879668d1a-merged.mount: Deactivated successfully.
Jan 27 09:00:29 compute-0 podman[261285]: 2026-01-27 09:00:29.652769793 +0000 UTC m=+0.256456606 container remove 494e42f9636a728c1f96a85a45cbd911fbda376e9201e4dd733f1c8aaed49197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 09:00:29 compute-0 systemd[1]: libpod-conmon-494e42f9636a728c1f96a85a45cbd911fbda376e9201e4dd733f1c8aaed49197.scope: Deactivated successfully.
Jan 27 09:00:29 compute-0 ceph-mon[74357]: pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:29 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:29 compute-0 podman[261325]: 2026-01-27 09:00:29.814763978 +0000 UTC m=+0.047217131 container create 00130e5a9081babeb6652102601be91aca270b65b3b7a84a82ac9d2e33061218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 27 09:00:29 compute-0 systemd[1]: Started libpod-conmon-00130e5a9081babeb6652102601be91aca270b65b3b7a84a82ac9d2e33061218.scope.
Jan 27 09:00:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e799a182df6e0bedc5f186fb3544626946c6dcfe69cf76a404e02aaaf8b2ef0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e799a182df6e0bedc5f186fb3544626946c6dcfe69cf76a404e02aaaf8b2ef0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e799a182df6e0bedc5f186fb3544626946c6dcfe69cf76a404e02aaaf8b2ef0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e799a182df6e0bedc5f186fb3544626946c6dcfe69cf76a404e02aaaf8b2ef0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:29 compute-0 podman[261325]: 2026-01-27 09:00:29.870435609 +0000 UTC m=+0.102888802 container init 00130e5a9081babeb6652102601be91aca270b65b3b7a84a82ac9d2e33061218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:00:29 compute-0 podman[261325]: 2026-01-27 09:00:29.878211631 +0000 UTC m=+0.110664784 container start 00130e5a9081babeb6652102601be91aca270b65b3b7a84a82ac9d2e33061218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 09:00:29 compute-0 podman[261325]: 2026-01-27 09:00:29.880582325 +0000 UTC m=+0.113035478 container attach 00130e5a9081babeb6652102601be91aca270b65b3b7a84a82ac9d2e33061218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 27 09:00:29 compute-0 podman[261325]: 2026-01-27 09:00:29.789398065 +0000 UTC m=+0.021851238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:00:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 09:00:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 09:00:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:30.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]: [
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:     {
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:         "available": false,
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:         "ceph_device": false,
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:         "lsm_data": {},
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:         "lvs": [],
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:         "path": "/dev/sr0",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:         "rejected_reasons": [
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "Has a FileSystem",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "Insufficient space (<5GB)"
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:         ],
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:         "sys_api": {
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "actuators": null,
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "device_nodes": "sr0",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "devname": "sr0",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "human_readable_size": "482.00 KB",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "id_bus": "ata",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "model": "QEMU DVD-ROM",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "nr_requests": "2",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "parent": "/dev/sr0",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "partitions": {},
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "path": "/dev/sr0",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "removable": "1",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "rev": "2.5+",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "ro": "0",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "rotational": "1",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "sas_address": "",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "sas_device_handle": "",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "scheduler_mode": "mq-deadline",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "sectors": 0,
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "sectorsize": "2048",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "size": 493568.0,
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "support_discard": "2048",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "type": "disk",
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:             "vendor": "QEMU"
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:         }
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]:     }
Jan 27 09:00:30 compute-0 affectionate_shirley[261342]: ]
Jan 27 09:00:30 compute-0 systemd[1]: libpod-00130e5a9081babeb6652102601be91aca270b65b3b7a84a82ac9d2e33061218.scope: Deactivated successfully.
Jan 27 09:00:30 compute-0 podman[261325]: 2026-01-27 09:00:30.973744787 +0000 UTC m=+1.206197950 container died 00130e5a9081babeb6652102601be91aca270b65b3b7a84a82ac9d2e33061218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 27 09:00:30 compute-0 systemd[1]: libpod-00130e5a9081babeb6652102601be91aca270b65b3b7a84a82ac9d2e33061218.scope: Consumed 1.086s CPU time.
Jan 27 09:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e799a182df6e0bedc5f186fb3544626946c6dcfe69cf76a404e02aaaf8b2ef0-merged.mount: Deactivated successfully.
Jan 27 09:00:31 compute-0 podman[261325]: 2026-01-27 09:00:31.020795151 +0000 UTC m=+1.253248304 container remove 00130e5a9081babeb6652102601be91aca270b65b3b7a84a82ac9d2e33061218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:00:31 compute-0 systemd[1]: libpod-conmon-00130e5a9081babeb6652102601be91aca270b65b3b7a84a82ac9d2e33061218.scope: Deactivated successfully.
Jan 27 09:00:31 compute-0 sudo[261220]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:00:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:00:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 27 09:00:31 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 27 09:00:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:31.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 09:00:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 09:00:32 compute-0 ceph-mon[74357]: pgmap v1114: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:32 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 27 09:00:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 27 09:00:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 27 09:00:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:00:32 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:00:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:00:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:00:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:00:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:32 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 907491f0-15e4-4a8c-a3de-2b7c75acb2df does not exist
Jan 27 09:00:32 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 50585407-32a2-4305-bcf9-8dacb140a19f does not exist
Jan 27 09:00:32 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev ea66b6ca-28af-4811-b468-741054f2b814 does not exist
Jan 27 09:00:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:00:32 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:00:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:00:32 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:00:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:00:32 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:00:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:32 compute-0 sudo[262431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:32 compute-0 sudo[262431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:32 compute-0 sudo[262431]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:32 compute-0 sudo[262456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:00:32 compute-0 sudo[262456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:32 compute-0 sudo[262456]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:32 compute-0 sudo[262481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:32 compute-0 sudo[262481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:32 compute-0 sudo[262481]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:32 compute-0 sudo[262506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:00:32 compute-0 sudo[262506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:32 compute-0 podman[262570]: 2026-01-27 09:00:32.698194851 +0000 UTC m=+0.051341323 container create c73799c3b1b9c3a010ebc404d8c73f2c1c0227cf1b2aec37c876ab8a4130b422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:00:32 compute-0 systemd[1]: Started libpod-conmon-c73799c3b1b9c3a010ebc404d8c73f2c1c0227cf1b2aec37c876ab8a4130b422.scope.
Jan 27 09:00:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:00:32 compute-0 podman[262570]: 2026-01-27 09:00:32.676506928 +0000 UTC m=+0.029653430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:00:32 compute-0 podman[262570]: 2026-01-27 09:00:32.781337173 +0000 UTC m=+0.134483655 container init c73799c3b1b9c3a010ebc404d8c73f2c1c0227cf1b2aec37c876ab8a4130b422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mendeleev, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:00:32 compute-0 podman[262570]: 2026-01-27 09:00:32.788008264 +0000 UTC m=+0.141154736 container start c73799c3b1b9c3a010ebc404d8c73f2c1c0227cf1b2aec37c876ab8a4130b422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mendeleev, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:00:32 compute-0 exciting_mendeleev[262587]: 167 167
Jan 27 09:00:32 compute-0 podman[262570]: 2026-01-27 09:00:32.792365804 +0000 UTC m=+0.145512266 container attach c73799c3b1b9c3a010ebc404d8c73f2c1c0227cf1b2aec37c876ab8a4130b422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mendeleev, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:00:32 compute-0 systemd[1]: libpod-c73799c3b1b9c3a010ebc404d8c73f2c1c0227cf1b2aec37c876ab8a4130b422.scope: Deactivated successfully.
Jan 27 09:00:32 compute-0 podman[262570]: 2026-01-27 09:00:32.793129155 +0000 UTC m=+0.146275627 container died c73799c3b1b9c3a010ebc404d8c73f2c1c0227cf1b2aec37c876ab8a4130b422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mendeleev, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba298b70b4aed9b65540cde521b8c748e99456c7cfda02e4fc1449023e54de40-merged.mount: Deactivated successfully.
Jan 27 09:00:32 compute-0 podman[262570]: 2026-01-27 09:00:32.830395492 +0000 UTC m=+0.183541944 container remove c73799c3b1b9c3a010ebc404d8c73f2c1c0227cf1b2aec37c876ab8a4130b422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mendeleev, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:00:32 compute-0 systemd[1]: libpod-conmon-c73799c3b1b9c3a010ebc404d8c73f2c1c0227cf1b2aec37c876ab8a4130b422.scope: Deactivated successfully.
Jan 27 09:00:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:32.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:33 compute-0 podman[262611]: 2026-01-27 09:00:33.0056696 +0000 UTC m=+0.042945024 container create f7fec4f9778cfa304d9855dadce6dd7858f175112985900ed677ae78bfa72b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 09:00:33 compute-0 systemd[1]: Started libpod-conmon-f7fec4f9778cfa304d9855dadce6dd7858f175112985900ed677ae78bfa72b81.scope.
Jan 27 09:00:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277a84c012b74278c458378645fea135da1eec0ec043ecb32b17625060dad0ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277a84c012b74278c458378645fea135da1eec0ec043ecb32b17625060dad0ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277a84c012b74278c458378645fea135da1eec0ec043ecb32b17625060dad0ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277a84c012b74278c458378645fea135da1eec0ec043ecb32b17625060dad0ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277a84c012b74278c458378645fea135da1eec0ec043ecb32b17625060dad0ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:33 compute-0 podman[262611]: 2026-01-27 09:00:32.984239575 +0000 UTC m=+0.021514989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:00:33 compute-0 podman[262611]: 2026-01-27 09:00:33.087326091 +0000 UTC m=+0.124601495 container init f7fec4f9778cfa304d9855dadce6dd7858f175112985900ed677ae78bfa72b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:00:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 27 09:00:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:00:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:00:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:00:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:00:33 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:00:33 compute-0 ceph-mon[74357]: pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:33 compute-0 podman[262611]: 2026-01-27 09:00:33.093098318 +0000 UTC m=+0.130373702 container start f7fec4f9778cfa304d9855dadce6dd7858f175112985900ed677ae78bfa72b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:00:33 compute-0 podman[262611]: 2026-01-27 09:00:33.096707906 +0000 UTC m=+0.133983310 container attach f7fec4f9778cfa304d9855dadce6dd7858f175112985900ed677ae78bfa72b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:00:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:33.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:33 compute-0 dazzling_chatelet[262627]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:00:33 compute-0 dazzling_chatelet[262627]: --> relative data size: 1.0
Jan 27 09:00:33 compute-0 dazzling_chatelet[262627]: --> All data devices are unavailable
Jan 27 09:00:33 compute-0 systemd[1]: libpod-f7fec4f9778cfa304d9855dadce6dd7858f175112985900ed677ae78bfa72b81.scope: Deactivated successfully.
Jan 27 09:00:33 compute-0 podman[262611]: 2026-01-27 09:00:33.863503772 +0000 UTC m=+0.900779156 container died f7fec4f9778cfa304d9855dadce6dd7858f175112985900ed677ae78bfa72b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:00:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-277a84c012b74278c458378645fea135da1eec0ec043ecb32b17625060dad0ad-merged.mount: Deactivated successfully.
Jan 27 09:00:33 compute-0 podman[262611]: 2026-01-27 09:00:33.919970085 +0000 UTC m=+0.957245469 container remove f7fec4f9778cfa304d9855dadce6dd7858f175112985900ed677ae78bfa72b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:00:33 compute-0 systemd[1]: libpod-conmon-f7fec4f9778cfa304d9855dadce6dd7858f175112985900ed677ae78bfa72b81.scope: Deactivated successfully.
Jan 27 09:00:33 compute-0 sudo[262506]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:34 compute-0 sudo[262656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:34 compute-0 sudo[262656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:34 compute-0 sudo[262656]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:34 compute-0 sudo[262681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:00:34 compute-0 sudo[262681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:34 compute-0 sudo[262681]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:34 compute-0 sudo[262706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:34 compute-0 sudo[262706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:34 compute-0 sudo[262706]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:34 compute-0 sudo[262731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:00:34 compute-0 sudo[262731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:34 compute-0 podman[262797]: 2026-01-27 09:00:34.464289014 +0000 UTC m=+0.034056682 container create 8958553428dc2e4211d6dee82531361421883dcb3d40e073730c1bff3f5133cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:00:34 compute-0 systemd[1]: Started libpod-conmon-8958553428dc2e4211d6dee82531361421883dcb3d40e073730c1bff3f5133cd.scope.
Jan 27 09:00:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:00:34 compute-0 podman[262797]: 2026-01-27 09:00:34.538173292 +0000 UTC m=+0.107940960 container init 8958553428dc2e4211d6dee82531361421883dcb3d40e073730c1bff3f5133cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:00:34 compute-0 podman[262797]: 2026-01-27 09:00:34.542913282 +0000 UTC m=+0.112680950 container start 8958553428dc2e4211d6dee82531361421883dcb3d40e073730c1bff3f5133cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:00:34 compute-0 podman[262797]: 2026-01-27 09:00:34.450760055 +0000 UTC m=+0.020527743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:00:34 compute-0 optimistic_mahavira[262813]: 167 167
Jan 27 09:00:34 compute-0 systemd[1]: libpod-8958553428dc2e4211d6dee82531361421883dcb3d40e073730c1bff3f5133cd.scope: Deactivated successfully.
Jan 27 09:00:34 compute-0 podman[262797]: 2026-01-27 09:00:34.548398392 +0000 UTC m=+0.118166060 container attach 8958553428dc2e4211d6dee82531361421883dcb3d40e073730c1bff3f5133cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:00:34 compute-0 podman[262797]: 2026-01-27 09:00:34.548839244 +0000 UTC m=+0.118606912 container died 8958553428dc2e4211d6dee82531361421883dcb3d40e073730c1bff3f5133cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6081b3536a9c0cee3fe233c8c056fb38ca81b4b93ca5593eb9582f6efaa0c46-merged.mount: Deactivated successfully.
Jan 27 09:00:34 compute-0 podman[262797]: 2026-01-27 09:00:34.592485886 +0000 UTC m=+0.162253554 container remove 8958553428dc2e4211d6dee82531361421883dcb3d40e073730c1bff3f5133cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:00:34 compute-0 systemd[1]: libpod-conmon-8958553428dc2e4211d6dee82531361421883dcb3d40e073730c1bff3f5133cd.scope: Deactivated successfully.
Jan 27 09:00:34 compute-0 podman[262837]: 2026-01-27 09:00:34.736572642 +0000 UTC m=+0.040613431 container create 9f525e1cea97389df21cd2665bf9b70d73b7e82deaabea5eedc92d964a6f3dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_feistel, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 27 09:00:34 compute-0 systemd[1]: Started libpod-conmon-9f525e1cea97389df21cd2665bf9b70d73b7e82deaabea5eedc92d964a6f3dc0.scope.
Jan 27 09:00:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf68cd4b072be1b7ccc1fea959b72fb8de6b9dd11f1afb4dd55464f32caa5438/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf68cd4b072be1b7ccc1fea959b72fb8de6b9dd11f1afb4dd55464f32caa5438/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf68cd4b072be1b7ccc1fea959b72fb8de6b9dd11f1afb4dd55464f32caa5438/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf68cd4b072be1b7ccc1fea959b72fb8de6b9dd11f1afb4dd55464f32caa5438/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:34 compute-0 podman[262837]: 2026-01-27 09:00:34.717422389 +0000 UTC m=+0.021463228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:00:34 compute-0 podman[262837]: 2026-01-27 09:00:34.819322622 +0000 UTC m=+0.123363461 container init 9f525e1cea97389df21cd2665bf9b70d73b7e82deaabea5eedc92d964a6f3dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:00:34 compute-0 podman[262837]: 2026-01-27 09:00:34.828477573 +0000 UTC m=+0.132518392 container start 9f525e1cea97389df21cd2665bf9b70d73b7e82deaabea5eedc92d964a6f3dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 09:00:34 compute-0 podman[262837]: 2026-01-27 09:00:34.831988348 +0000 UTC m=+0.136029137 container attach 9f525e1cea97389df21cd2665bf9b70d73b7e82deaabea5eedc92d964a6f3dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:00:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:34.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:35 compute-0 ceph-mon[74357]: pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:35.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]: {
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:     "0": [
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:         {
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             "devices": [
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "/dev/loop3"
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             ],
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             "lv_name": "ceph_lv0",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             "lv_size": "7511998464",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             "name": "ceph_lv0",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             "tags": {
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "ceph.cluster_name": "ceph",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "ceph.crush_device_class": "",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "ceph.encrypted": "0",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "ceph.osd_id": "0",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "ceph.type": "block",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:                 "ceph.vdo": "0"
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             },
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             "type": "block",
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:             "vg_name": "ceph_vg0"
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:         }
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]:     ]
Jan 27 09:00:35 compute-0 beautiful_feistel[262853]: }
Jan 27 09:00:35 compute-0 systemd[1]: libpod-9f525e1cea97389df21cd2665bf9b70d73b7e82deaabea5eedc92d964a6f3dc0.scope: Deactivated successfully.
Jan 27 09:00:35 compute-0 conmon[262853]: conmon 9f525e1cea97389df21c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9f525e1cea97389df21cd2665bf9b70d73b7e82deaabea5eedc92d964a6f3dc0.scope/container/memory.events
Jan 27 09:00:35 compute-0 podman[262837]: 2026-01-27 09:00:35.609069395 +0000 UTC m=+0.913110184 container died 9f525e1cea97389df21cd2665bf9b70d73b7e82deaabea5eedc92d964a6f3dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_feistel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:00:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf68cd4b072be1b7ccc1fea959b72fb8de6b9dd11f1afb4dd55464f32caa5438-merged.mount: Deactivated successfully.
Jan 27 09:00:35 compute-0 podman[262837]: 2026-01-27 09:00:35.680574299 +0000 UTC m=+0.984615088 container remove 9f525e1cea97389df21cd2665bf9b70d73b7e82deaabea5eedc92d964a6f3dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_feistel, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:00:35 compute-0 systemd[1]: libpod-conmon-9f525e1cea97389df21cd2665bf9b70d73b7e82deaabea5eedc92d964a6f3dc0.scope: Deactivated successfully.
Jan 27 09:00:35 compute-0 sudo[262731]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:35 compute-0 sudo[262874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:35 compute-0 sudo[262874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:35 compute-0 sudo[262874]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:35 compute-0 sudo[262899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:00:35 compute-0 sudo[262899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:35 compute-0 sudo[262899]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:35 compute-0 sudo[262924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:35 compute-0 sudo[262924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:35 compute-0 sudo[262924]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:35 compute-0 sudo[262949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:00:35 compute-0 sudo[262949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:36 compute-0 podman[263013]: 2026-01-27 09:00:36.252945374 +0000 UTC m=+0.037975568 container create 0f1378adc38f5d53a01b038fdc4ac1fe41f9a16986fc236beb2b12f2724bb7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 27 09:00:36 compute-0 systemd[1]: Started libpod-conmon-0f1378adc38f5d53a01b038fdc4ac1fe41f9a16986fc236beb2b12f2724bb7fc.scope.
Jan 27 09:00:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:00:36 compute-0 podman[263013]: 2026-01-27 09:00:36.324278353 +0000 UTC m=+0.109308557 container init 0f1378adc38f5d53a01b038fdc4ac1fe41f9a16986fc236beb2b12f2724bb7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:00:36 compute-0 podman[263013]: 2026-01-27 09:00:36.330197574 +0000 UTC m=+0.115227768 container start 0f1378adc38f5d53a01b038fdc4ac1fe41f9a16986fc236beb2b12f2724bb7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:00:36 compute-0 podman[263013]: 2026-01-27 09:00:36.236496024 +0000 UTC m=+0.021526238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:00:36 compute-0 podman[263013]: 2026-01-27 09:00:36.333483174 +0000 UTC m=+0.118513378 container attach 0f1378adc38f5d53a01b038fdc4ac1fe41f9a16986fc236beb2b12f2724bb7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:00:36 compute-0 thirsty_mcnulty[263029]: 167 167
Jan 27 09:00:36 compute-0 systemd[1]: libpod-0f1378adc38f5d53a01b038fdc4ac1fe41f9a16986fc236beb2b12f2724bb7fc.scope: Deactivated successfully.
Jan 27 09:00:36 compute-0 podman[263013]: 2026-01-27 09:00:36.334989104 +0000 UTC m=+0.120019298 container died 0f1378adc38f5d53a01b038fdc4ac1fe41f9a16986fc236beb2b12f2724bb7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:00:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd923348d9a330eb9191331869dd9f1d7fef6d45953128ac7d9e477628cf9167-merged.mount: Deactivated successfully.
Jan 27 09:00:36 compute-0 podman[263013]: 2026-01-27 09:00:36.366471135 +0000 UTC m=+0.151501329 container remove 0f1378adc38f5d53a01b038fdc4ac1fe41f9a16986fc236beb2b12f2724bb7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:00:36 compute-0 systemd[1]: libpod-conmon-0f1378adc38f5d53a01b038fdc4ac1fe41f9a16986fc236beb2b12f2724bb7fc.scope: Deactivated successfully.
Jan 27 09:00:36 compute-0 podman[263051]: 2026-01-27 09:00:36.525665633 +0000 UTC m=+0.046135241 container create 8782344e1dbb359b4f48752597975a0de7bd1faa60ab54b353d9deaefb094560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:00:36 compute-0 systemd[1]: Started libpod-conmon-8782344e1dbb359b4f48752597975a0de7bd1faa60ab54b353d9deaefb094560.scope.
Jan 27 09:00:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5863092f6734dc33eb834eede7831d0f90b019b86c46fead8c87f6a31e7a8f16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5863092f6734dc33eb834eede7831d0f90b019b86c46fead8c87f6a31e7a8f16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:36 compute-0 podman[263051]: 2026-01-27 09:00:36.503599761 +0000 UTC m=+0.024069389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5863092f6734dc33eb834eede7831d0f90b019b86c46fead8c87f6a31e7a8f16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5863092f6734dc33eb834eede7831d0f90b019b86c46fead8c87f6a31e7a8f16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:36 compute-0 podman[263051]: 2026-01-27 09:00:36.629116689 +0000 UTC m=+0.149586317 container init 8782344e1dbb359b4f48752597975a0de7bd1faa60ab54b353d9deaefb094560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:00:36 compute-0 podman[263051]: 2026-01-27 09:00:36.634946409 +0000 UTC m=+0.155416017 container start 8782344e1dbb359b4f48752597975a0de7bd1faa60ab54b353d9deaefb094560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:00:36 compute-0 podman[263051]: 2026-01-27 09:00:36.670253073 +0000 UTC m=+0.190722681 container attach 8782344e1dbb359b4f48752597975a0de7bd1faa60ab54b353d9deaefb094560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_easley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 09:00:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:36.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:37 compute-0 ceph-mon[74357]: pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:37.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:37 compute-0 agitated_easley[263068]: {
Jan 27 09:00:37 compute-0 agitated_easley[263068]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:00:37 compute-0 agitated_easley[263068]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:00:37 compute-0 agitated_easley[263068]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:00:37 compute-0 agitated_easley[263068]:         "osd_id": 0,
Jan 27 09:00:37 compute-0 agitated_easley[263068]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:00:37 compute-0 agitated_easley[263068]:         "type": "bluestore"
Jan 27 09:00:37 compute-0 agitated_easley[263068]:     }
Jan 27 09:00:37 compute-0 agitated_easley[263068]: }
Jan 27 09:00:37 compute-0 systemd[1]: libpod-8782344e1dbb359b4f48752597975a0de7bd1faa60ab54b353d9deaefb094560.scope: Deactivated successfully.
Jan 27 09:00:37 compute-0 podman[263089]: 2026-01-27 09:00:37.554911729 +0000 UTC m=+0.027950745 container died 8782344e1dbb359b4f48752597975a0de7bd1faa60ab54b353d9deaefb094560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5863092f6734dc33eb834eede7831d0f90b019b86c46fead8c87f6a31e7a8f16-merged.mount: Deactivated successfully.
Jan 27 09:00:37 compute-0 podman[263089]: 2026-01-27 09:00:37.615412081 +0000 UTC m=+0.088451107 container remove 8782344e1dbb359b4f48752597975a0de7bd1faa60ab54b353d9deaefb094560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_easley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:00:37 compute-0 systemd[1]: libpod-conmon-8782344e1dbb359b4f48752597975a0de7bd1faa60ab54b353d9deaefb094560.scope: Deactivated successfully.
Jan 27 09:00:37 compute-0 sudo[262949]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:00:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:00:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:37 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 8460b5a7-a223-4515-9e7b-b5d558650056 does not exist
Jan 27 09:00:37 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 71cc6546-0b04-47ec-ad50-7a542bb664de does not exist
Jan 27 09:00:37 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 83faaecd-c729-44e1-a80b-b9b13bc7869c does not exist
Jan 27 09:00:37 compute-0 sudo[263105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:37 compute-0 sudo[263105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:37 compute-0 sudo[263105]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:37 compute-0 sudo[263130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:00:37 compute-0 sudo[263130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:37 compute-0 sudo[263130]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:38 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:00:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:38.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:39.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:40 compute-0 ceph-mon[74357]: pgmap v1118: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:40 compute-0 podman[263156]: 2026-01-27 09:00:40.273060268 +0000 UTC m=+0.085351501 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:00:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:40.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:41 compute-0 ceph-mon[74357]: pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:41.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:42.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:43.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:43 compute-0 ceph-mon[74357]: pgmap v1120: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:44.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:00:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:00:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:00:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:00:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:00:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:00:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:45 compute-0 ceph-mon[74357]: pgmap v1121: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:46 compute-0 sudo[263185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:46 compute-0 sudo[263185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:46 compute-0 sudo[263185]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:46 compute-0 sudo[263210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:00:46 compute-0 sudo[263210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:00:46 compute-0 sudo[263210]: pam_unix(sudo:session): session closed for user root
Jan 27 09:00:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:46.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:47.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:47 compute-0 ceph-mon[74357]: pgmap v1122: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:48.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:49.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:49 compute-0 ceph-mon[74357]: pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:50.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:51.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:51 compute-0 ceph-mon[74357]: pgmap v1124: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:52.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:53.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:53 compute-0 ceph-mon[74357]: pgmap v1125: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:00:54.242 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:00:54.242 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:00:54.242 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:54.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:55 compute-0 podman[263239]: 2026-01-27 09:00:55.237229592 +0000 UTC m=+0.047178860 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:00:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:55.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:55 compute-0 ceph-mon[74357]: pgmap v1126: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:56.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:00:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:57.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:00:57 compute-0 ceph-mon[74357]: pgmap v1127: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:00:58.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:00:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3020176102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:00:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:00:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3020176102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:00:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:00:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:00:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:00:59.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:00:59 compute-0 ceph-mon[74357]: pgmap v1128: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:00:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3020176102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:00:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3020176102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:01:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 27 09:01:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.003000081s ======
Jan 27 09:01:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:00.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Jan 27 09:01:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:01.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:01 compute-0 CROND[263263]: (root) CMD (run-parts /etc/cron.hourly)
Jan 27 09:01:01 compute-0 run-parts[263266]: (/etc/cron.hourly) starting 0anacron
Jan 27 09:01:01 compute-0 run-parts[263272]: (/etc/cron.hourly) finished 0anacron
Jan 27 09:01:01 compute-0 CROND[263262]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 27 09:01:01 compute-0 ceph-mon[74357]: pgmap v1129: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 27 09:01:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 09:01:02 compute-0 nova_compute[247671]: 2026-01-27 09:01:02.390 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:01:02 compute-0 nova_compute[247671]: 2026-01-27 09:01:02.390 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:01:02 compute-0 nova_compute[247671]: 2026-01-27 09:01:02.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:01:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:02.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:03.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:03 compute-0 nova_compute[247671]: 2026-01-27 09:01:03.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:01:03 compute-0 ceph-mon[74357]: pgmap v1130: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 09:01:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 09:01:04 compute-0 nova_compute[247671]: 2026-01-27 09:01:04.371 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:01:04 compute-0 nova_compute[247671]: 2026-01-27 09:01:04.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:01:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:04.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:05.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:05 compute-0 nova_compute[247671]: 2026-01-27 09:01:05.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:01:05 compute-0 nova_compute[247671]: 2026-01-27 09:01:05.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:01:05 compute-0 nova_compute[247671]: 2026-01-27 09:01:05.421 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:01:05 compute-0 nova_compute[247671]: 2026-01-27 09:01:05.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:01:05 compute-0 nova_compute[247671]: 2026-01-27 09:01:05.439 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:01:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:05 compute-0 ceph-mon[74357]: pgmap v1131: 305 pgs: 305 active+clean; 41 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 09:01:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 27 09:01:06 compute-0 sudo[263276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:06 compute-0 sudo[263276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:06 compute-0 sudo[263276]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:06 compute-0 sudo[263301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:06 compute-0 sudo[263301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:06 compute-0 sudo[263301]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:06.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:06 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3207958103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:01:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:07.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:07 compute-0 ceph-mon[74357]: pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 27 09:01:07 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/561815719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:01:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 27 09:01:08 compute-0 nova_compute[247671]: 2026-01-27 09:01:08.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:01:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:08.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:09.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:10 compute-0 ceph-mon[74357]: pgmap v1133: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 27 09:01:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 27 09:01:10 compute-0 nova_compute[247671]: 2026-01-27 09:01:10.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:01:10 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:01:10.652 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:01:10 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:01:10.652 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:01:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:10.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:11 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1403152240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:01:11 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/792044205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:01:11 compute-0 podman[263328]: 2026-01-27 09:01:11.277000018 +0000 UTC m=+0.094347368 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 09:01:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:11.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:11 compute-0 nova_compute[247671]: 2026-01-27 09:01:11.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:01:11 compute-0 nova_compute[247671]: 2026-01-27 09:01:11.456 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:01:11 compute-0 nova_compute[247671]: 2026-01-27 09:01:11.456 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:01:11 compute-0 nova_compute[247671]: 2026-01-27 09:01:11.456 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:01:11 compute-0 nova_compute[247671]: 2026-01-27 09:01:11.457 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:01:11 compute-0 nova_compute[247671]: 2026-01-27 09:01:11.457 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:01:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:01:11 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1870912134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:01:11 compute-0 nova_compute[247671]: 2026-01-27 09:01:11.929 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:01:12 compute-0 nova_compute[247671]: 2026-01-27 09:01:12.067 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:01:12 compute-0 nova_compute[247671]: 2026-01-27 09:01:12.068 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5185MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:01:12 compute-0 nova_compute[247671]: 2026-01-27 09:01:12.069 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:01:12 compute-0 nova_compute[247671]: 2026-01-27 09:01:12.069 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:01:12 compute-0 ceph-mon[74357]: pgmap v1134: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 27 09:01:12 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1870912134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:01:12 compute-0 nova_compute[247671]: 2026-01-27 09:01:12.146 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:01:12 compute-0 nova_compute[247671]: 2026-01-27 09:01:12.147 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:01:12 compute-0 nova_compute[247671]: 2026-01-27 09:01:12.165 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:01:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Jan 27 09:01:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:01:12 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075864530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:01:12 compute-0 nova_compute[247671]: 2026-01-27 09:01:12.613 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:01:12 compute-0 nova_compute[247671]: 2026-01-27 09:01:12.622 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:01:12 compute-0 nova_compute[247671]: 2026-01-27 09:01:12.643 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:01:12 compute-0 nova_compute[247671]: 2026-01-27 09:01:12.645 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:01:12 compute-0 nova_compute[247671]: 2026-01-27 09:01:12.646 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:01:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:12.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:13 compute-0 ceph-mon[74357]: pgmap v1135: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Jan 27 09:01:13 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2075864530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:01:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:13.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Jan 27 09:01:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:14.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:01:15
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'volumes', '.mgr', 'images']
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:01:15 compute-0 ceph-mon[74357]: pgmap v1136: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:01:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:01:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:15.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:15 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:01:15.654 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:01:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Jan 27 09:01:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:16.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:17 compute-0 ceph-mon[74357]: pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Jan 27 09:01:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:17.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:18.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:19 compute-0 ceph-mon[74357]: pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:19.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:20.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:21 compute-0 ceph-mon[74357]: pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:21.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:22.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:23 compute-0 ceph-mon[74357]: pgmap v1140: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:23.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:01:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:01:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:24.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:25.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:25 compute-0 ceph-mon[74357]: pgmap v1141: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:26 compute-0 podman[263407]: 2026-01-27 09:01:26.301552903 +0000 UTC m=+0.108124795 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:01:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:26.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:26 compute-0 sudo[263427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:26 compute-0 sudo[263427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:26 compute-0 sudo[263427]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:27 compute-0 sudo[263452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:27 compute-0 sudo[263452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:27 compute-0 sudo[263452]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:27.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:27 compute-0 ceph-mon[74357]: pgmap v1142: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:28.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:29.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:29 compute-0 ceph-mon[74357]: pgmap v1143: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:30.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:31.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:31 compute-0 ceph-mon[74357]: pgmap v1144: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:32.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:33.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:33 compute-0 ceph-mon[74357]: pgmap v1145: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:34.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:35.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:35 compute-0 ceph-mon[74357]: pgmap v1146: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:36.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:37.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:37 compute-0 ceph-mon[74357]: pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:38 compute-0 sudo[263483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:38 compute-0 sudo[263483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:38 compute-0 sudo[263483]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:38 compute-0 sudo[263508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:01:38 compute-0 sudo[263508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:38 compute-0 sudo[263508]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:38 compute-0 sudo[263533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:38 compute-0 sudo[263533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:38 compute-0 sudo[263533]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:38 compute-0 sudo[263558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:01:38 compute-0 sudo[263558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:38 compute-0 sudo[263558]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:38.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:39.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:39 compute-0 ceph-mon[74357]: pgmap v1148: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:40.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 09:01:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:01:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 09:01:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:01:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 27 09:01:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 27 09:01:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:01:41 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:01:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:01:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:01:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:01:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:01:41 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 87419bca-823b-480e-84e3-97b3f4558699 does not exist
Jan 27 09:01:41 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 3f820297-bea0-4b06-83b1-4ff78ef988db does not exist
Jan 27 09:01:41 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 2e0f317a-b336-4fb4-9902-1eb3d625e3fc does not exist
Jan 27 09:01:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:01:41 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:01:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:01:41 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:01:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:01:41 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:01:41 compute-0 sudo[263614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:41 compute-0 sudo[263614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:41 compute-0 sudo[263614]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:41.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:41 compute-0 sudo[263645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:01:41 compute-0 sudo[263645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:41 compute-0 sudo[263645]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:41 compute-0 podman[263638]: 2026-01-27 09:01:41.522993144 +0000 UTC m=+0.113902432 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 09:01:41 compute-0 sudo[263687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:41 compute-0 sudo[263687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:41 compute-0 sudo[263687]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:41 compute-0 sudo[263715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:01:41 compute-0 sudo[263715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:41 compute-0 ceph-mon[74357]: pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:01:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:01:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 27 09:01:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:01:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:01:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:01:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:01:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:01:41 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:01:42 compute-0 podman[263780]: 2026-01-27 09:01:42.040621354 +0000 UTC m=+0.073007855 container create d2cfa4c862e37edfcc625dc100bba5ecf8c71b0094aba36e4c4c7c1e540def00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:01:42 compute-0 systemd[1]: Started libpod-conmon-d2cfa4c862e37edfcc625dc100bba5ecf8c71b0094aba36e4c4c7c1e540def00.scope.
Jan 27 09:01:42 compute-0 podman[263780]: 2026-01-27 09:01:42.009233007 +0000 UTC m=+0.041619528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:01:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:01:42 compute-0 podman[263780]: 2026-01-27 09:01:42.139818844 +0000 UTC m=+0.172205365 container init d2cfa4c862e37edfcc625dc100bba5ecf8c71b0094aba36e4c4c7c1e540def00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pike, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:01:42 compute-0 podman[263780]: 2026-01-27 09:01:42.149317043 +0000 UTC m=+0.181703544 container start d2cfa4c862e37edfcc625dc100bba5ecf8c71b0094aba36e4c4c7c1e540def00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:01:42 compute-0 adoring_pike[263798]: 167 167
Jan 27 09:01:42 compute-0 systemd[1]: libpod-d2cfa4c862e37edfcc625dc100bba5ecf8c71b0094aba36e4c4c7c1e540def00.scope: Deactivated successfully.
Jan 27 09:01:42 compute-0 podman[263780]: 2026-01-27 09:01:42.156773497 +0000 UTC m=+0.189160018 container attach d2cfa4c862e37edfcc625dc100bba5ecf8c71b0094aba36e4c4c7c1e540def00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pike, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:01:42 compute-0 podman[263780]: 2026-01-27 09:01:42.157483686 +0000 UTC m=+0.189870187 container died d2cfa4c862e37edfcc625dc100bba5ecf8c71b0094aba36e4c4c7c1e540def00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pike, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:01:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-148aaf9bea8a70c80e2501e1b23b74d47e4ca808f7c9ed72c15776d20cdff6e1-merged.mount: Deactivated successfully.
Jan 27 09:01:42 compute-0 podman[263780]: 2026-01-27 09:01:42.202246159 +0000 UTC m=+0.234632650 container remove d2cfa4c862e37edfcc625dc100bba5ecf8c71b0094aba36e4c4c7c1e540def00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pike, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:01:42 compute-0 systemd[1]: libpod-conmon-d2cfa4c862e37edfcc625dc100bba5ecf8c71b0094aba36e4c4c7c1e540def00.scope: Deactivated successfully.
Jan 27 09:01:42 compute-0 podman[263822]: 2026-01-27 09:01:42.372219932 +0000 UTC m=+0.041359720 container create 782f1afcd9dcca555a65064d088782bca4d6937575b6413b8365b277e898d360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:01:42 compute-0 systemd[1]: Started libpod-conmon-782f1afcd9dcca555a65064d088782bca4d6937575b6413b8365b277e898d360.scope.
Jan 27 09:01:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:01:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df24454576d947d5795edd43cd30738b883b4fe26b3978de96cbf2c54f28692/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df24454576d947d5795edd43cd30738b883b4fe26b3978de96cbf2c54f28692/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df24454576d947d5795edd43cd30738b883b4fe26b3978de96cbf2c54f28692/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df24454576d947d5795edd43cd30738b883b4fe26b3978de96cbf2c54f28692/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df24454576d947d5795edd43cd30738b883b4fe26b3978de96cbf2c54f28692/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:42 compute-0 podman[263822]: 2026-01-27 09:01:42.35126763 +0000 UTC m=+0.020407438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:01:42 compute-0 podman[263822]: 2026-01-27 09:01:42.455095706 +0000 UTC m=+0.124235594 container init 782f1afcd9dcca555a65064d088782bca4d6937575b6413b8365b277e898d360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:01:42 compute-0 podman[263822]: 2026-01-27 09:01:42.462305453 +0000 UTC m=+0.131445261 container start 782f1afcd9dcca555a65064d088782bca4d6937575b6413b8365b277e898d360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:01:42 compute-0 podman[263822]: 2026-01-27 09:01:42.466775195 +0000 UTC m=+0.135915083 container attach 782f1afcd9dcca555a65064d088782bca4d6937575b6413b8365b277e898d360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 27 09:01:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:42.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:43 compute-0 serene_engelbart[263839]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:01:43 compute-0 serene_engelbart[263839]: --> relative data size: 1.0
Jan 27 09:01:43 compute-0 serene_engelbart[263839]: --> All data devices are unavailable
Jan 27 09:01:43 compute-0 systemd[1]: libpod-782f1afcd9dcca555a65064d088782bca4d6937575b6413b8365b277e898d360.scope: Deactivated successfully.
Jan 27 09:01:43 compute-0 podman[263854]: 2026-01-27 09:01:43.312674051 +0000 UTC m=+0.029706192 container died 782f1afcd9dcca555a65064d088782bca4d6937575b6413b8365b277e898d360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_engelbart, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:01:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7df24454576d947d5795edd43cd30738b883b4fe26b3978de96cbf2c54f28692-merged.mount: Deactivated successfully.
Jan 27 09:01:43 compute-0 podman[263854]: 2026-01-27 09:01:43.359296175 +0000 UTC m=+0.076328316 container remove 782f1afcd9dcca555a65064d088782bca4d6937575b6413b8365b277e898d360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_engelbart, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:01:43 compute-0 systemd[1]: libpod-conmon-782f1afcd9dcca555a65064d088782bca4d6937575b6413b8365b277e898d360.scope: Deactivated successfully.
Jan 27 09:01:43 compute-0 sudo[263715]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:43.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:43 compute-0 sudo[263871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:43 compute-0 sudo[263871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:43 compute-0 sudo[263871]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:43 compute-0 sudo[263896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:01:43 compute-0 sudo[263896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:43 compute-0 sudo[263896]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:43 compute-0 sudo[263921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:43 compute-0 sudo[263921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:43 compute-0 sudo[263921]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:43 compute-0 sudo[263946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:01:43 compute-0 sudo[263946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:43 compute-0 ceph-mon[74357]: pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:43 compute-0 podman[264011]: 2026-01-27 09:01:43.951930423 +0000 UTC m=+0.043839188 container create 7bd9a7fdf6ff5c6f99daf53b3da9b64874db2b6b0aa08d90e2532cb749807acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldwasser, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 27 09:01:43 compute-0 systemd[1]: Started libpod-conmon-7bd9a7fdf6ff5c6f99daf53b3da9b64874db2b6b0aa08d90e2532cb749807acc.scope.
Jan 27 09:01:44 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:01:44 compute-0 podman[264011]: 2026-01-27 09:01:44.017225567 +0000 UTC m=+0.109134302 container init 7bd9a7fdf6ff5c6f99daf53b3da9b64874db2b6b0aa08d90e2532cb749807acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldwasser, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 27 09:01:44 compute-0 podman[264011]: 2026-01-27 09:01:44.022287766 +0000 UTC m=+0.114196491 container start 7bd9a7fdf6ff5c6f99daf53b3da9b64874db2b6b0aa08d90e2532cb749807acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldwasser, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:01:44 compute-0 podman[264011]: 2026-01-27 09:01:43.929708626 +0000 UTC m=+0.021617391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:01:44 compute-0 podman[264011]: 2026-01-27 09:01:44.024800624 +0000 UTC m=+0.116709349 container attach 7bd9a7fdf6ff5c6f99daf53b3da9b64874db2b6b0aa08d90e2532cb749807acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldwasser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:01:44 compute-0 focused_goldwasser[264027]: 167 167
Jan 27 09:01:44 compute-0 systemd[1]: libpod-7bd9a7fdf6ff5c6f99daf53b3da9b64874db2b6b0aa08d90e2532cb749807acc.scope: Deactivated successfully.
Jan 27 09:01:44 compute-0 podman[264011]: 2026-01-27 09:01:44.027705633 +0000 UTC m=+0.119614358 container died 7bd9a7fdf6ff5c6f99daf53b3da9b64874db2b6b0aa08d90e2532cb749807acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 09:01:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bd42d054ba999c35d5ee072700579cdae4a956b7f9dfdb631a250098c7117af-merged.mount: Deactivated successfully.
Jan 27 09:01:44 compute-0 podman[264011]: 2026-01-27 09:01:44.062771611 +0000 UTC m=+0.154680336 container remove 7bd9a7fdf6ff5c6f99daf53b3da9b64874db2b6b0aa08d90e2532cb749807acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 09:01:44 compute-0 systemd[1]: libpod-conmon-7bd9a7fdf6ff5c6f99daf53b3da9b64874db2b6b0aa08d90e2532cb749807acc.scope: Deactivated successfully.
Jan 27 09:01:44 compute-0 podman[264053]: 2026-01-27 09:01:44.230976656 +0000 UTC m=+0.038661117 container create 795f2c125988082c05051e5640e5e6602282c2f9287fcd59505909ba9eba6b38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:01:44 compute-0 systemd[1]: Started libpod-conmon-795f2c125988082c05051e5640e5e6602282c2f9287fcd59505909ba9eba6b38.scope.
Jan 27 09:01:44 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ae3040fa0c8f6aa9b077d102835c94680005d5da26ea76664b8e0749a3185/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ae3040fa0c8f6aa9b077d102835c94680005d5da26ea76664b8e0749a3185/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ae3040fa0c8f6aa9b077d102835c94680005d5da26ea76664b8e0749a3185/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ae3040fa0c8f6aa9b077d102835c94680005d5da26ea76664b8e0749a3185/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:44 compute-0 podman[264053]: 2026-01-27 09:01:44.299761825 +0000 UTC m=+0.107446286 container init 795f2c125988082c05051e5640e5e6602282c2f9287fcd59505909ba9eba6b38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sutherland, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:01:44 compute-0 podman[264053]: 2026-01-27 09:01:44.212073949 +0000 UTC m=+0.019758430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:01:44 compute-0 podman[264053]: 2026-01-27 09:01:44.313231552 +0000 UTC m=+0.120916013 container start 795f2c125988082c05051e5640e5e6602282c2f9287fcd59505909ba9eba6b38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 09:01:44 compute-0 podman[264053]: 2026-01-27 09:01:44.316449001 +0000 UTC m=+0.124133512 container attach 795f2c125988082c05051e5640e5e6602282c2f9287fcd59505909ba9eba6b38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:01:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:44.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:01:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:01:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:01:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]: {
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:     "0": [
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:         {
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             "devices": [
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "/dev/loop3"
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             ],
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             "lv_name": "ceph_lv0",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             "lv_size": "7511998464",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             "name": "ceph_lv0",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             "tags": {
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "ceph.cluster_name": "ceph",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "ceph.crush_device_class": "",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "ceph.encrypted": "0",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "ceph.osd_id": "0",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "ceph.type": "block",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:                 "ceph.vdo": "0"
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             },
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             "type": "block",
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:             "vg_name": "ceph_vg0"
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:         }
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]:     ]
Jan 27 09:01:45 compute-0 pedantic_sutherland[264069]: }
Jan 27 09:01:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:01:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:01:45 compute-0 systemd[1]: libpod-795f2c125988082c05051e5640e5e6602282c2f9287fcd59505909ba9eba6b38.scope: Deactivated successfully.
Jan 27 09:01:45 compute-0 podman[264053]: 2026-01-27 09:01:45.0980131 +0000 UTC m=+0.905697571 container died 795f2c125988082c05051e5640e5e6602282c2f9287fcd59505909ba9eba6b38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:01:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e0ae3040fa0c8f6aa9b077d102835c94680005d5da26ea76664b8e0749a3185-merged.mount: Deactivated successfully.
Jan 27 09:01:45 compute-0 podman[264053]: 2026-01-27 09:01:45.155305965 +0000 UTC m=+0.962990426 container remove 795f2c125988082c05051e5640e5e6602282c2f9287fcd59505909ba9eba6b38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sutherland, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 09:01:45 compute-0 systemd[1]: libpod-conmon-795f2c125988082c05051e5640e5e6602282c2f9287fcd59505909ba9eba6b38.scope: Deactivated successfully.
Jan 27 09:01:45 compute-0 sudo[263946]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:45 compute-0 sudo[264089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:45 compute-0 sudo[264089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:45 compute-0 sudo[264089]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:45 compute-0 sudo[264114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:01:45 compute-0 sudo[264114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:45 compute-0 sudo[264114]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:45 compute-0 sudo[264139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:45 compute-0 sudo[264139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:45 compute-0 sudo[264139]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:45.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:45 compute-0 sudo[264164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:01:45 compute-0 sudo[264164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:45 compute-0 podman[264230]: 2026-01-27 09:01:45.770576182 +0000 UTC m=+0.051848068 container create c4bcd177fe02f3f523302175c3275d0ef86f48d718787d1641165ebf83fdfad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cartwright, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:01:45 compute-0 ceph-mon[74357]: pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:45 compute-0 systemd[1]: Started libpod-conmon-c4bcd177fe02f3f523302175c3275d0ef86f48d718787d1641165ebf83fdfad3.scope.
Jan 27 09:01:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:01:45 compute-0 podman[264230]: 2026-01-27 09:01:45.748801557 +0000 UTC m=+0.030073493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:01:45 compute-0 podman[264230]: 2026-01-27 09:01:45.851545693 +0000 UTC m=+0.132817599 container init c4bcd177fe02f3f523302175c3275d0ef86f48d718787d1641165ebf83fdfad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:01:45 compute-0 podman[264230]: 2026-01-27 09:01:45.858958316 +0000 UTC m=+0.140230202 container start c4bcd177fe02f3f523302175c3275d0ef86f48d718787d1641165ebf83fdfad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cartwright, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 09:01:45 compute-0 podman[264230]: 2026-01-27 09:01:45.862097251 +0000 UTC m=+0.143369137 container attach c4bcd177fe02f3f523302175c3275d0ef86f48d718787d1641165ebf83fdfad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cartwright, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 27 09:01:45 compute-0 cool_cartwright[264246]: 167 167
Jan 27 09:01:45 compute-0 systemd[1]: libpod-c4bcd177fe02f3f523302175c3275d0ef86f48d718787d1641165ebf83fdfad3.scope: Deactivated successfully.
Jan 27 09:01:45 compute-0 podman[264230]: 2026-01-27 09:01:45.864269361 +0000 UTC m=+0.145541267 container died c4bcd177fe02f3f523302175c3275d0ef86f48d718787d1641165ebf83fdfad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 27 09:01:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cdd8244208df84c6e2cb7888fb992f2edd3e2844d8772f2ad69eca713ab05ba-merged.mount: Deactivated successfully.
Jan 27 09:01:45 compute-0 podman[264230]: 2026-01-27 09:01:45.899604417 +0000 UTC m=+0.180876293 container remove c4bcd177fe02f3f523302175c3275d0ef86f48d718787d1641165ebf83fdfad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cartwright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:01:45 compute-0 systemd[1]: libpod-conmon-c4bcd177fe02f3f523302175c3275d0ef86f48d718787d1641165ebf83fdfad3.scope: Deactivated successfully.
Jan 27 09:01:46 compute-0 podman[264270]: 2026-01-27 09:01:46.057412927 +0000 UTC m=+0.042844551 container create 508f125a593de381892c854d8d2ccb53316e102a1296ae1ac15e26bb4329e675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:01:46 compute-0 systemd[1]: Started libpod-conmon-508f125a593de381892c854d8d2ccb53316e102a1296ae1ac15e26bb4329e675.scope.
Jan 27 09:01:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:01:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fdd3cffb4b3942e570d619de9d2835813a81042393a81f8b2b0e57cae53341/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fdd3cffb4b3942e570d619de9d2835813a81042393a81f8b2b0e57cae53341/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fdd3cffb4b3942e570d619de9d2835813a81042393a81f8b2b0e57cae53341/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fdd3cffb4b3942e570d619de9d2835813a81042393a81f8b2b0e57cae53341/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:46 compute-0 podman[264270]: 2026-01-27 09:01:46.037906644 +0000 UTC m=+0.023338288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:01:46 compute-0 podman[264270]: 2026-01-27 09:01:46.135573652 +0000 UTC m=+0.121005296 container init 508f125a593de381892c854d8d2ccb53316e102a1296ae1ac15e26bb4329e675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 27 09:01:46 compute-0 podman[264270]: 2026-01-27 09:01:46.141386101 +0000 UTC m=+0.126817715 container start 508f125a593de381892c854d8d2ccb53316e102a1296ae1ac15e26bb4329e675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elbakyan, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:01:46 compute-0 podman[264270]: 2026-01-27 09:01:46.158113588 +0000 UTC m=+0.143545232 container attach 508f125a593de381892c854d8d2ccb53316e102a1296ae1ac15e26bb4329e675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:01:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:46 compute-0 eager_elbakyan[264289]: {
Jan 27 09:01:46 compute-0 eager_elbakyan[264289]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:01:46 compute-0 eager_elbakyan[264289]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:01:46 compute-0 eager_elbakyan[264289]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:01:46 compute-0 eager_elbakyan[264289]:         "osd_id": 0,
Jan 27 09:01:46 compute-0 eager_elbakyan[264289]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:01:46 compute-0 eager_elbakyan[264289]:         "type": "bluestore"
Jan 27 09:01:46 compute-0 eager_elbakyan[264289]:     }
Jan 27 09:01:46 compute-0 eager_elbakyan[264289]: }
Jan 27 09:01:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:46.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:46 compute-0 systemd[1]: libpod-508f125a593de381892c854d8d2ccb53316e102a1296ae1ac15e26bb4329e675.scope: Deactivated successfully.
Jan 27 09:01:46 compute-0 podman[264310]: 2026-01-27 09:01:46.986054394 +0000 UTC m=+0.026143725 container died 508f125a593de381892c854d8d2ccb53316e102a1296ae1ac15e26bb4329e675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 09:01:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-09fdd3cffb4b3942e570d619de9d2835813a81042393a81f8b2b0e57cae53341-merged.mount: Deactivated successfully.
Jan 27 09:01:47 compute-0 podman[264310]: 2026-01-27 09:01:47.035844664 +0000 UTC m=+0.075933975 container remove 508f125a593de381892c854d8d2ccb53316e102a1296ae1ac15e26bb4329e675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elbakyan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:01:47 compute-0 systemd[1]: libpod-conmon-508f125a593de381892c854d8d2ccb53316e102a1296ae1ac15e26bb4329e675.scope: Deactivated successfully.
Jan 27 09:01:47 compute-0 sudo[264164]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:01:47 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:01:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:01:47 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:01:47 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 1e85668e-1af8-46f2-b0bc-a6014cd575f8 does not exist
Jan 27 09:01:47 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 707ecd9d-eee4-4d86-8a0a-60314b73cb6f does not exist
Jan 27 09:01:47 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 64062e8a-a420-4710-a0d9-bc5f1dfc53fb does not exist
Jan 27 09:01:47 compute-0 sudo[264325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:47 compute-0 sudo[264325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:47 compute-0 sudo[264325]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:47 compute-0 sudo[264349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:47 compute-0 sudo[264349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:47 compute-0 sudo[264349]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:47 compute-0 sudo[264359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:01:47 compute-0 sudo[264359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:47 compute-0 sudo[264359]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:47 compute-0 sudo[264399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:01:47 compute-0 sudo[264399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:01:47 compute-0 sudo[264399]: pam_unix(sudo:session): session closed for user root
Jan 27 09:01:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:47.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:47 compute-0 ceph-mon[74357]: pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:47 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:01:47 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:01:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:48.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:49.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:49 compute-0 ceph-mon[74357]: pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:01:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 83 B/s wr, 6 op/s
Jan 27 09:01:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:50.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:51.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:51 compute-0 ceph-mon[74357]: pgmap v1154: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 83 B/s wr, 6 op/s
Jan 27 09:01:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 09:01:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:52.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:53.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:53 compute-0 ceph-mon[74357]: pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 09:01:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:01:54.243 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:01:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:01:54.244 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:01:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:01:54.244 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:01:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 09:01:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:54.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:01:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:55.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:01:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:55 compute-0 ceph-mon[74357]: pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 09:01:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 84 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 36 op/s
Jan 27 09:01:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:56.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:57 compute-0 podman[264430]: 2026-01-27 09:01:57.251928839 +0000 UTC m=+0.065830359 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Jan 27 09:01:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:57.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:57 compute-0 ceph-mon[74357]: pgmap v1157: 305 pgs: 305 active+clean; 84 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 36 op/s
Jan 27 09:01:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 84 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 36 op/s
Jan 27 09:01:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:01:58.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:01:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:01:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:01:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:01:59.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:00 compute-0 ceph-mon[74357]: pgmap v1158: 305 pgs: 305 active+clean; 84 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 36 op/s
Jan 27 09:02:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/859472631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:02:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/859472631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:02:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 88 MiB data, 228 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:02:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:00.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:01.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:02 compute-0 ceph-mon[74357]: pgmap v1159: 305 pgs: 305 active+clean; 88 MiB data, 228 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:02:02 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3110838028' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:02:02 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3110838028' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:02:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 88 MiB data, 228 MiB used, 21 GiB / 21 GiB avail; 400 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 27 09:02:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:02.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:02:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:03.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:02:03 compute-0 nova_compute[247671]: 2026-01-27 09:02:03.647 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:02:03 compute-0 nova_compute[247671]: 2026-01-27 09:02:03.647 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:02:03 compute-0 nova_compute[247671]: 2026-01-27 09:02:03.647 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:02:04 compute-0 ceph-mon[74357]: pgmap v1160: 305 pgs: 305 active+clean; 88 MiB data, 228 MiB used, 21 GiB / 21 GiB avail; 400 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 27 09:02:04 compute-0 nova_compute[247671]: 2026-01-27 09:02:04.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:02:04 compute-0 nova_compute[247671]: 2026-01-27 09:02:04.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:02:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 88 MiB data, 228 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 09:02:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:04.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:05 compute-0 ceph-mon[74357]: pgmap v1161: 305 pgs: 305 active+clean; 88 MiB data, 228 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 09:02:05 compute-0 nova_compute[247671]: 2026-01-27 09:02:05.419 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:02:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:05.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:06 compute-0 nova_compute[247671]: 2026-01-27 09:02:06.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:02:06 compute-0 nova_compute[247671]: 2026-01-27 09:02:06.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:02:06 compute-0 nova_compute[247671]: 2026-01-27 09:02:06.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:02:06 compute-0 nova_compute[247671]: 2026-01-27 09:02:06.437 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:02:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Jan 27 09:02:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:06.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:07 compute-0 sudo[264456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:07 compute-0 sudo[264456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:07 compute-0 sudo[264456]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:07 compute-0 sudo[264481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:07 compute-0 sudo[264481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:07 compute-0 sudo[264481]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:07.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:07 compute-0 ceph-mon[74357]: pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Jan 27 09:02:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 44 KiB/s wr, 21 op/s
Jan 27 09:02:08 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1729745837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:02:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:08.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:09 compute-0 nova_compute[247671]: 2026-01-27 09:02:09.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:02:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:09.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:09 compute-0 ceph-mon[74357]: pgmap v1163: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 44 KiB/s wr, 21 op/s
Jan 27 09:02:09 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/874537876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:02:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 44 KiB/s wr, 21 op/s
Jan 27 09:02:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:10.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:11 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:02:11.436 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:02:11 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:02:11.437 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:02:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 09:02:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:11.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 09:02:11 compute-0 ceph-mon[74357]: pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 44 KiB/s wr, 21 op/s
Jan 27 09:02:11 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3221712594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:02:11 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/4142556856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:02:12 compute-0 podman[264509]: 2026-01-27 09:02:12.350907195 +0000 UTC m=+0.164568567 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 09:02:12 compute-0 nova_compute[247671]: 2026-01-27 09:02:12.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:02:12 compute-0 nova_compute[247671]: 2026-01-27 09:02:12.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:02:12 compute-0 nova_compute[247671]: 2026-01-27 09:02:12.486 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:02:12 compute-0 nova_compute[247671]: 2026-01-27 09:02:12.487 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:02:12 compute-0 nova_compute[247671]: 2026-01-27 09:02:12.487 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:02:12 compute-0 nova_compute[247671]: 2026-01-27 09:02:12.487 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:02:12 compute-0 nova_compute[247671]: 2026-01-27 09:02:12.488 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:02:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:02:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:02:12 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1547493985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:02:12 compute-0 nova_compute[247671]: 2026-01-27 09:02:12.952 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:02:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:12.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:13 compute-0 nova_compute[247671]: 2026-01-27 09:02:13.097 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:02:13 compute-0 nova_compute[247671]: 2026-01-27 09:02:13.099 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5198MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:02:13 compute-0 nova_compute[247671]: 2026-01-27 09:02:13.099 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:02:13 compute-0 nova_compute[247671]: 2026-01-27 09:02:13.099 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:02:13 compute-0 nova_compute[247671]: 2026-01-27 09:02:13.153 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:02:13 compute-0 nova_compute[247671]: 2026-01-27 09:02:13.153 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:02:13 compute-0 nova_compute[247671]: 2026-01-27 09:02:13.168 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:02:13 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1547493985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:02:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:13.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:02:13 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3129191370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:02:13 compute-0 nova_compute[247671]: 2026-01-27 09:02:13.821 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:02:13 compute-0 nova_compute[247671]: 2026-01-27 09:02:13.826 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:02:13 compute-0 nova_compute[247671]: 2026-01-27 09:02:13.886 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:02:13 compute-0 nova_compute[247671]: 2026-01-27 09:02:13.887 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:02:13 compute-0 nova_compute[247671]: 2026-01-27 09:02:13.888 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:02:14 compute-0 ceph-mon[74357]: pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:02:14 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3129191370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:02:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:02:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:14.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:02:15
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.meta']
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:02:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:02:15 compute-0 ceph-mon[74357]: pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:02:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:15.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:02:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:02:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:16.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:02:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:17.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:17 compute-0 ceph-mon[74357]: pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:02:18 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:02:18.439 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:02:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:18.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:19.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:19 compute-0 ceph-mon[74357]: pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:20.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:21.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:22 compute-0 ceph-mon[74357]: pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:22.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:23.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:24 compute-0 ceph-mon[74357]: pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.045273) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504544045320, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2155, "num_deletes": 254, "total_data_size": 4014264, "memory_usage": 4078976, "flush_reason": "Manual Compaction"}
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504544069969, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3874904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24171, "largest_seqno": 26325, "table_properties": {"data_size": 3865156, "index_size": 6178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19996, "raw_average_key_size": 20, "raw_value_size": 3845658, "raw_average_value_size": 3932, "num_data_blocks": 274, "num_entries": 978, "num_filter_entries": 978, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769504341, "oldest_key_time": 1769504341, "file_creation_time": 1769504544, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 24758 microseconds, and 10520 cpu microseconds.
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.070026) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3874904 bytes OK
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.070055) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.072056) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.072075) EVENT_LOG_v1 {"time_micros": 1769504544072069, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.072093) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4005501, prev total WAL file size 4005501, number of live WAL files 2.
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.073648) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3784KB)], [56(9160KB)]
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504544073683, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 13255093, "oldest_snapshot_seqno": -1}
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5303 keys, 11266956 bytes, temperature: kUnknown
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504544147404, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 11266956, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11228494, "index_size": 24080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 132501, "raw_average_key_size": 24, "raw_value_size": 11129635, "raw_average_value_size": 2098, "num_data_blocks": 996, "num_entries": 5303, "num_filter_entries": 5303, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769504544, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.147828) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 11266956 bytes
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.149586) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.4 rd, 152.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.9 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 5830, records dropped: 527 output_compression: NoCompression
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.149618) EVENT_LOG_v1 {"time_micros": 1769504544149604, "job": 30, "event": "compaction_finished", "compaction_time_micros": 73898, "compaction_time_cpu_micros": 32583, "output_level": 6, "num_output_files": 1, "total_output_size": 11266956, "num_input_records": 5830, "num_output_records": 5303, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504544151431, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504544155283, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.073562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.155411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.155418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.155421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.155424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:02:24 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:02:24.155428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:02:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:24.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:02:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:25.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:02:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:26 compute-0 ceph-mon[74357]: pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:02:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:26.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:02:27 compute-0 sudo[264587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:27 compute-0 sudo[264587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:27 compute-0 sudo[264587]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:02:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:27.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:02:27 compute-0 sudo[264614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:27 compute-0 sudo[264614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:27 compute-0 sudo[264614]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:27 compute-0 podman[264611]: 2026-01-27 09:02:27.566135436 +0000 UTC m=+0.074841116 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 27 09:02:28 compute-0 ceph-mon[74357]: pgmap v1172: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:28.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:29 compute-0 ceph-mon[74357]: pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:29.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:30.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:31.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:31 compute-0 ceph-mon[74357]: pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:32.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:33.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:33 compute-0 ceph-mon[74357]: pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:34.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:02:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:35.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:02:35 compute-0 ceph-mon[74357]: pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:36.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:02:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:37.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:02:37 compute-0 ceph-mon[74357]: pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:38.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:39.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:39 compute-0 ceph-mon[74357]: pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:02:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:40.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:02:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:41.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:42 compute-0 ceph-mon[74357]: pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:43.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:43 compute-0 podman[264662]: 2026-01-27 09:02:43.273611715 +0000 UTC m=+0.085442794 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 09:02:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:43.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:43 compute-0 ceph-mon[74357]: pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:45.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:02:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:02:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:02:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:02:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:02:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:02:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:02:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:45.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:02:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:46 compute-0 ceph-mon[74357]: pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:02:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:47.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:02:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:47.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:47 compute-0 ceph-mon[74357]: pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:47 compute-0 sudo[264690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:47 compute-0 sudo[264690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:47 compute-0 sudo[264694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:47 compute-0 sudo[264690]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:47 compute-0 sudo[264694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:47 compute-0 sudo[264694]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:47 compute-0 sudo[264740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:47 compute-0 sudo[264740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:47 compute-0 sudo[264740]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:47 compute-0 sudo[264741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:02:47 compute-0 sudo[264741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:47 compute-0 sudo[264741]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:47 compute-0 sudo[264790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:47 compute-0 sudo[264790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:47 compute-0 sudo[264790]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:47 compute-0 sudo[264815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:02:47 compute-0 sudo[264815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:48 compute-0 sudo[264815]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:02:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:02:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:02:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:02:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:02:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:02:48 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c6c0fe38-9e9d-4a8b-a649-4306c7ceff6b does not exist
Jan 27 09:02:48 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c49ad68f-79cf-4fe1-a94b-2cf21a79f9bd does not exist
Jan 27 09:02:48 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 79ec908c-e2c8-4239-bbce-f042a5031848 does not exist
Jan 27 09:02:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:02:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:02:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:02:48 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:02:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:02:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:02:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:48 compute-0 sudo[264872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:48 compute-0 sudo[264872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:48 compute-0 sudo[264872]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:48 compute-0 sudo[264897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:02:48 compute-0 sudo[264897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:48 compute-0 sudo[264897]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:48 compute-0 sudo[264922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:48 compute-0 sudo[264922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:48 compute-0 sudo[264922]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:48 compute-0 sudo[264947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:02:48 compute-0 sudo[264947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:02:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:02:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:02:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:02:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:02:48 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:02:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:49.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:49 compute-0 podman[265013]: 2026-01-27 09:02:49.06962022 +0000 UTC m=+0.023089701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:02:49 compute-0 podman[265013]: 2026-01-27 09:02:49.405871936 +0000 UTC m=+0.359341427 container create 481924351dbe46c537e2dbd2a691dd35c8e5aeec09a1acfcc9b97ebc66a07714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:02:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:49.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:49 compute-0 systemd[1]: Started libpod-conmon-481924351dbe46c537e2dbd2a691dd35c8e5aeec09a1acfcc9b97ebc66a07714.scope.
Jan 27 09:02:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:02:49 compute-0 podman[265013]: 2026-01-27 09:02:49.849249187 +0000 UTC m=+0.802718678 container init 481924351dbe46c537e2dbd2a691dd35c8e5aeec09a1acfcc9b97ebc66a07714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mahavira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:02:49 compute-0 podman[265013]: 2026-01-27 09:02:49.858401737 +0000 UTC m=+0.811871198 container start 481924351dbe46c537e2dbd2a691dd35c8e5aeec09a1acfcc9b97ebc66a07714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:02:49 compute-0 priceless_mahavira[265029]: 167 167
Jan 27 09:02:49 compute-0 systemd[1]: libpod-481924351dbe46c537e2dbd2a691dd35c8e5aeec09a1acfcc9b97ebc66a07714.scope: Deactivated successfully.
Jan 27 09:02:50 compute-0 podman[265013]: 2026-01-27 09:02:50.009555476 +0000 UTC m=+0.963024947 container attach 481924351dbe46c537e2dbd2a691dd35c8e5aeec09a1acfcc9b97ebc66a07714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mahavira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:02:50 compute-0 podman[265013]: 2026-01-27 09:02:50.011377766 +0000 UTC m=+0.964847247 container died 481924351dbe46c537e2dbd2a691dd35c8e5aeec09a1acfcc9b97ebc66a07714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mahavira, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:02:50 compute-0 ceph-mon[74357]: pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-35c7978cd3be1c8ab24eb6996c4507187f5e7084c61ae22800557fbf82b4cbe2-merged.mount: Deactivated successfully.
Jan 27 09:02:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:50 compute-0 podman[265013]: 2026-01-27 09:02:50.935519059 +0000 UTC m=+1.888988520 container remove 481924351dbe46c537e2dbd2a691dd35c8e5aeec09a1acfcc9b97ebc66a07714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mahavira, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 09:02:50 compute-0 systemd[1]: libpod-conmon-481924351dbe46c537e2dbd2a691dd35c8e5aeec09a1acfcc9b97ebc66a07714.scope: Deactivated successfully.
Jan 27 09:02:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:51.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:51 compute-0 podman[265054]: 2026-01-27 09:02:51.071538225 +0000 UTC m=+0.023380610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:02:51 compute-0 podman[265054]: 2026-01-27 09:02:51.190876454 +0000 UTC m=+0.142718829 container create 5fc71104ba2036575c48fcdffc1f1fd9b63673066b381c957ac077fc8feb3190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 27 09:02:51 compute-0 ceph-mon[74357]: pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:51 compute-0 systemd[1]: Started libpod-conmon-5fc71104ba2036575c48fcdffc1f1fd9b63673066b381c957ac077fc8feb3190.scope.
Jan 27 09:02:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df703d7b8f2557285f0db4b1b7d0c2fa087b5130e643bec85e14abaf7d7791ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df703d7b8f2557285f0db4b1b7d0c2fa087b5130e643bec85e14abaf7d7791ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df703d7b8f2557285f0db4b1b7d0c2fa087b5130e643bec85e14abaf7d7791ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df703d7b8f2557285f0db4b1b7d0c2fa087b5130e643bec85e14abaf7d7791ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df703d7b8f2557285f0db4b1b7d0c2fa087b5130e643bec85e14abaf7d7791ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:51 compute-0 podman[265054]: 2026-01-27 09:02:51.300430988 +0000 UTC m=+0.252273363 container init 5fc71104ba2036575c48fcdffc1f1fd9b63673066b381c957ac077fc8feb3190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_keller, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 09:02:51 compute-0 podman[265054]: 2026-01-27 09:02:51.311330344 +0000 UTC m=+0.263172719 container start 5fc71104ba2036575c48fcdffc1f1fd9b63673066b381c957ac077fc8feb3190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 09:02:51 compute-0 podman[265054]: 2026-01-27 09:02:51.314525323 +0000 UTC m=+0.266367688 container attach 5fc71104ba2036575c48fcdffc1f1fd9b63673066b381c957ac077fc8feb3190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_keller, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 09:02:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:02:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:51.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:02:52 compute-0 cranky_keller[265071]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:02:52 compute-0 cranky_keller[265071]: --> relative data size: 1.0
Jan 27 09:02:52 compute-0 cranky_keller[265071]: --> All data devices are unavailable
Jan 27 09:02:52 compute-0 systemd[1]: libpod-5fc71104ba2036575c48fcdffc1f1fd9b63673066b381c957ac077fc8feb3190.scope: Deactivated successfully.
Jan 27 09:02:52 compute-0 podman[265054]: 2026-01-27 09:02:52.060801658 +0000 UTC m=+1.012644013 container died 5fc71104ba2036575c48fcdffc1f1fd9b63673066b381c957ac077fc8feb3190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_keller, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:02:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-df703d7b8f2557285f0db4b1b7d0c2fa087b5130e643bec85e14abaf7d7791ca-merged.mount: Deactivated successfully.
Jan 27 09:02:52 compute-0 podman[265054]: 2026-01-27 09:02:52.127612633 +0000 UTC m=+1.079454998 container remove 5fc71104ba2036575c48fcdffc1f1fd9b63673066b381c957ac077fc8feb3190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:02:52 compute-0 systemd[1]: libpod-conmon-5fc71104ba2036575c48fcdffc1f1fd9b63673066b381c957ac077fc8feb3190.scope: Deactivated successfully.
Jan 27 09:02:52 compute-0 sudo[264947]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:52 compute-0 sudo[265100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:52 compute-0 sudo[265100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:52 compute-0 sudo[265100]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:52 compute-0 sudo[265125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:02:52 compute-0 sudo[265125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:52 compute-0 sudo[265125]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:52 compute-0 sudo[265150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:52 compute-0 sudo[265150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:52 compute-0 sudo[265150]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:52 compute-0 sudo[265175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:02:52 compute-0 sudo[265175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:52 compute-0 podman[265239]: 2026-01-27 09:02:52.691182237 +0000 UTC m=+0.052722181 container create adc0489ac2ea94f3ab13056947264ec412fccd93a77be8eb927a321095995276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sutherland, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:02:52 compute-0 systemd[1]: Started libpod-conmon-adc0489ac2ea94f3ab13056947264ec412fccd93a77be8eb927a321095995276.scope.
Jan 27 09:02:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:02:52 compute-0 podman[265239]: 2026-01-27 09:02:52.663657985 +0000 UTC m=+0.025197969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:02:52 compute-0 podman[265239]: 2026-01-27 09:02:52.761062646 +0000 UTC m=+0.122602570 container init adc0489ac2ea94f3ab13056947264ec412fccd93a77be8eb927a321095995276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sutherland, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:02:52 compute-0 podman[265239]: 2026-01-27 09:02:52.766464204 +0000 UTC m=+0.128004108 container start adc0489ac2ea94f3ab13056947264ec412fccd93a77be8eb927a321095995276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 09:02:52 compute-0 podman[265239]: 2026-01-27 09:02:52.76926697 +0000 UTC m=+0.130806914 container attach adc0489ac2ea94f3ab13056947264ec412fccd93a77be8eb927a321095995276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sutherland, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:02:52 compute-0 sleepy_sutherland[265256]: 167 167
Jan 27 09:02:52 compute-0 systemd[1]: libpod-adc0489ac2ea94f3ab13056947264ec412fccd93a77be8eb927a321095995276.scope: Deactivated successfully.
Jan 27 09:02:52 compute-0 podman[265239]: 2026-01-27 09:02:52.771036708 +0000 UTC m=+0.132576622 container died adc0489ac2ea94f3ab13056947264ec412fccd93a77be8eb927a321095995276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 09:02:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb8cf6e58c4be625d7abd459516d967de6fcc7f05715db38a3dfddeee9818ccc-merged.mount: Deactivated successfully.
Jan 27 09:02:52 compute-0 podman[265239]: 2026-01-27 09:02:52.800150444 +0000 UTC m=+0.161690348 container remove adc0489ac2ea94f3ab13056947264ec412fccd93a77be8eb927a321095995276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sutherland, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 27 09:02:52 compute-0 systemd[1]: libpod-conmon-adc0489ac2ea94f3ab13056947264ec412fccd93a77be8eb927a321095995276.scope: Deactivated successfully.
Jan 27 09:02:52 compute-0 podman[265279]: 2026-01-27 09:02:52.961253784 +0000 UTC m=+0.045289148 container create ac637f78dec1dc456ba67c9bbe0c27a1efc2d783428e33efdbe5cf60bc81fdd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilbur, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:02:52 compute-0 systemd[1]: Started libpod-conmon-ac637f78dec1dc456ba67c9bbe0c27a1efc2d783428e33efdbe5cf60bc81fdd4.scope.
Jan 27 09:02:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9af8f030e3c7056fb45a9037369024e274d2d479d84f76db62caee88952560/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9af8f030e3c7056fb45a9037369024e274d2d479d84f76db62caee88952560/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9af8f030e3c7056fb45a9037369024e274d2d479d84f76db62caee88952560/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9af8f030e3c7056fb45a9037369024e274d2d479d84f76db62caee88952560/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:53 compute-0 podman[265279]: 2026-01-27 09:02:53.028670656 +0000 UTC m=+0.112706060 container init ac637f78dec1dc456ba67c9bbe0c27a1efc2d783428e33efdbe5cf60bc81fdd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:02:53 compute-0 podman[265279]: 2026-01-27 09:02:53.037679562 +0000 UTC m=+0.121714936 container start ac637f78dec1dc456ba67c9bbe0c27a1efc2d783428e33efdbe5cf60bc81fdd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:02:53 compute-0 podman[265279]: 2026-01-27 09:02:52.946266965 +0000 UTC m=+0.030302349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:02:53 compute-0 podman[265279]: 2026-01-27 09:02:53.041439245 +0000 UTC m=+0.125474619 container attach ac637f78dec1dc456ba67c9bbe0c27a1efc2d783428e33efdbe5cf60bc81fdd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilbur, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:02:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:53.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:53 compute-0 ceph-mon[74357]: pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]: {
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:     "0": [
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:         {
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             "devices": [
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "/dev/loop3"
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             ],
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             "lv_name": "ceph_lv0",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             "lv_size": "7511998464",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             "name": "ceph_lv0",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             "tags": {
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "ceph.cluster_name": "ceph",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "ceph.crush_device_class": "",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "ceph.encrypted": "0",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "ceph.osd_id": "0",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "ceph.type": "block",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:                 "ceph.vdo": "0"
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             },
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             "type": "block",
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:             "vg_name": "ceph_vg0"
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:         }
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]:     ]
Jan 27 09:02:53 compute-0 sharp_wilbur[265296]: }
Jan 27 09:02:53 compute-0 systemd[1]: libpod-ac637f78dec1dc456ba67c9bbe0c27a1efc2d783428e33efdbe5cf60bc81fdd4.scope: Deactivated successfully.
Jan 27 09:02:53 compute-0 podman[265279]: 2026-01-27 09:02:53.772565976 +0000 UTC m=+0.856601340 container died ac637f78dec1dc456ba67c9bbe0c27a1efc2d783428e33efdbe5cf60bc81fdd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilbur, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:02:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc9af8f030e3c7056fb45a9037369024e274d2d479d84f76db62caee88952560-merged.mount: Deactivated successfully.
Jan 27 09:02:53 compute-0 podman[265279]: 2026-01-27 09:02:53.82101969 +0000 UTC m=+0.905055054 container remove ac637f78dec1dc456ba67c9bbe0c27a1efc2d783428e33efdbe5cf60bc81fdd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilbur, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 09:02:53 compute-0 systemd[1]: libpod-conmon-ac637f78dec1dc456ba67c9bbe0c27a1efc2d783428e33efdbe5cf60bc81fdd4.scope: Deactivated successfully.
Jan 27 09:02:53 compute-0 sudo[265175]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:53 compute-0 sudo[265316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:53 compute-0 sudo[265316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:53 compute-0 sudo[265316]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:53 compute-0 sudo[265341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:02:53 compute-0 sudo[265341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:54 compute-0 sudo[265341]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:54 compute-0 sudo[265366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:54 compute-0 sudo[265366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:54 compute-0 sudo[265366]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:54 compute-0 sudo[265391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:02:54 compute-0 sudo[265391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:02:54.244 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:02:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:02:54.246 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:02:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:02:54.246 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:02:54 compute-0 podman[265457]: 2026-01-27 09:02:54.538087517 +0000 UTC m=+0.047147059 container create edfd0482671bcbb7af9a52fbc34645bc5fbc8eaea521b04fd23106a886020484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 09:02:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:54 compute-0 systemd[1]: Started libpod-conmon-edfd0482671bcbb7af9a52fbc34645bc5fbc8eaea521b04fd23106a886020484.scope.
Jan 27 09:02:54 compute-0 podman[265457]: 2026-01-27 09:02:54.516107497 +0000 UTC m=+0.025167059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:02:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:02:54 compute-0 podman[265457]: 2026-01-27 09:02:54.634326707 +0000 UTC m=+0.143386269 container init edfd0482671bcbb7af9a52fbc34645bc5fbc8eaea521b04fd23106a886020484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 27 09:02:54 compute-0 podman[265457]: 2026-01-27 09:02:54.641990745 +0000 UTC m=+0.151050247 container start edfd0482671bcbb7af9a52fbc34645bc5fbc8eaea521b04fd23106a886020484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:02:54 compute-0 podman[265457]: 2026-01-27 09:02:54.645326057 +0000 UTC m=+0.154385579 container attach edfd0482671bcbb7af9a52fbc34645bc5fbc8eaea521b04fd23106a886020484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:02:54 compute-0 suspicious_jemison[265474]: 167 167
Jan 27 09:02:54 compute-0 systemd[1]: libpod-edfd0482671bcbb7af9a52fbc34645bc5fbc8eaea521b04fd23106a886020484.scope: Deactivated successfully.
Jan 27 09:02:54 compute-0 podman[265457]: 2026-01-27 09:02:54.651203268 +0000 UTC m=+0.160262780 container died edfd0482671bcbb7af9a52fbc34645bc5fbc8eaea521b04fd23106a886020484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 27 09:02:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9061d021df8941372eec8304781ef48b138051149f38e33b2907835d51902a78-merged.mount: Deactivated successfully.
Jan 27 09:02:54 compute-0 podman[265457]: 2026-01-27 09:02:54.697685057 +0000 UTC m=+0.206744569 container remove edfd0482671bcbb7af9a52fbc34645bc5fbc8eaea521b04fd23106a886020484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 09:02:54 compute-0 systemd[1]: libpod-conmon-edfd0482671bcbb7af9a52fbc34645bc5fbc8eaea521b04fd23106a886020484.scope: Deactivated successfully.
Jan 27 09:02:54 compute-0 podman[265497]: 2026-01-27 09:02:54.880668956 +0000 UTC m=+0.044760994 container create b224174689e50631b996b1387aed735e1f2018033383f49f9c159607f39010b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:02:54 compute-0 systemd[1]: Started libpod-conmon-b224174689e50631b996b1387aed735e1f2018033383f49f9c159607f39010b4.scope.
Jan 27 09:02:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4105f0ef5a5476a2970f7ddc4e82e4e0a96de8c1fb32bcc48477f35738c50991/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4105f0ef5a5476a2970f7ddc4e82e4e0a96de8c1fb32bcc48477f35738c50991/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4105f0ef5a5476a2970f7ddc4e82e4e0a96de8c1fb32bcc48477f35738c50991/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:54 compute-0 podman[265497]: 2026-01-27 09:02:54.861587134 +0000 UTC m=+0.025679222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:02:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4105f0ef5a5476a2970f7ddc4e82e4e0a96de8c1fb32bcc48477f35738c50991/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:55.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:55 compute-0 podman[265497]: 2026-01-27 09:02:55.021433991 +0000 UTC m=+0.185526039 container init b224174689e50631b996b1387aed735e1f2018033383f49f9c159607f39010b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:02:55 compute-0 podman[265497]: 2026-01-27 09:02:55.028666938 +0000 UTC m=+0.192758976 container start b224174689e50631b996b1387aed735e1f2018033383f49f9c159607f39010b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:02:55 compute-0 podman[265497]: 2026-01-27 09:02:55.032434801 +0000 UTC m=+0.196526839 container attach b224174689e50631b996b1387aed735e1f2018033383f49f9c159607f39010b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hodgkin, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 27 09:02:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:55.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:55 compute-0 ceph-mon[74357]: pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:55 compute-0 musing_hodgkin[265513]: {
Jan 27 09:02:55 compute-0 musing_hodgkin[265513]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:02:55 compute-0 musing_hodgkin[265513]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:02:55 compute-0 musing_hodgkin[265513]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:02:55 compute-0 musing_hodgkin[265513]:         "osd_id": 0,
Jan 27 09:02:55 compute-0 musing_hodgkin[265513]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:02:55 compute-0 musing_hodgkin[265513]:         "type": "bluestore"
Jan 27 09:02:55 compute-0 musing_hodgkin[265513]:     }
Jan 27 09:02:55 compute-0 musing_hodgkin[265513]: }
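The JSON block above is what the short-lived ceph container (name=musing_hodgkin) printed to stdout before exiting: an inventory of a single BlueStore OSD keyed by its UUID, matching the mgr/cephadm/host.compute-0.devices.0 config-key write that follows moments later. A minimal sketch of consuming that payload, with every value copied verbatim from the log:

    import json

    # Values below are copied verbatim from the container output above.
    raw = '''
    {
        "c06a7c81-ab3c-42b8-812f-79473670be30": {
            "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
            "type": "bluestore"
        }
    }
    '''
    for osd_uuid, osd in json.loads(raw).items():
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}")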
Jan 27 09:02:55 compute-0 systemd[1]: libpod-b224174689e50631b996b1387aed735e1f2018033383f49f9c159607f39010b4.scope: Deactivated successfully.
Jan 27 09:02:55 compute-0 podman[265497]: 2026-01-27 09:02:55.813492497 +0000 UTC m=+0.977584535 container died b224174689e50631b996b1387aed735e1f2018033383f49f9c159607f39010b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 27 09:02:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4105f0ef5a5476a2970f7ddc4e82e4e0a96de8c1fb32bcc48477f35738c50991-merged.mount: Deactivated successfully.
Jan 27 09:02:55 compute-0 podman[265497]: 2026-01-27 09:02:55.869562639 +0000 UTC m=+1.033654677 container remove b224174689e50631b996b1387aed735e1f2018033383f49f9c159607f39010b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hodgkin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 27 09:02:55 compute-0 systemd[1]: libpod-conmon-b224174689e50631b996b1387aed735e1f2018033383f49f9c159607f39010b4.scope: Deactivated successfully.
Jan 27 09:02:55 compute-0 sudo[265391]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:02:55 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:02:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:02:55 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:02:55 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 491f25c7-641f-45b0-a8c9-6d32c3515dd1 does not exist
Jan 27 09:02:55 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 3ec9a162-8f9c-46fa-a93c-4d8d4f7b7711 does not exist
Jan 27 09:02:55 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 406892c0-e7f0-4e14-9f35-361c2f9cdad7 does not exist
Jan 27 09:02:55 compute-0 sudo[265545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:02:55 compute-0 sudo[265545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:55 compute-0 sudo[265545]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:56 compute-0 sudo[265570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:02:56 compute-0 sudo[265570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:02:56 compute-0 sudo[265570]: pam_unix(sudo:session): session closed for user root
Jan 27 09:02:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:56 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:02:56 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:02:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:57.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:57.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:57 compute-0 ceph-mon[74357]: pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:58 compute-0 podman[265596]: 2026-01-27 09:02:58.273598697 +0000 UTC m=+0.079352759 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 09:02:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:02:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:02:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:02:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:02:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:02:59.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
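The starting new request / req done / beast triplets from radosgw repeat every two seconds, alternating between clients 192.168.122.100 and 192.168.122.102; anonymous HEAD / probes answered 200 with zero bytes are load-balancer health checks, not user traffic. A throwaway parser for the beast access line, with the field layout inferred from the entries above rather than taken from radosgw source:

    import re

    # Field layout inferred from the log lines above, not from radosgw docs.
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<latency>[\d.]+)s'
    )
    line = ('beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous '
            '[27/Jan/2026:09:02:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    if m:
        print(m.group('client'), m.group('req'), m.group('status'),
              m.group('latency'))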
Jan 27 09:02:59 compute-0 ceph-mon[74357]: pgmap v1188: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:02:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2231823351' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:02:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2231823351' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:03:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.710699) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504580710740, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 818, "num_deletes": 506, "total_data_size": 661632, "memory_usage": 677720, "flush_reason": "Manual Compaction"}
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504580716683, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 653884, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26326, "largest_seqno": 27143, "table_properties": {"data_size": 650148, "index_size": 1068, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11618, "raw_average_key_size": 18, "raw_value_size": 640642, "raw_average_value_size": 1015, "num_data_blocks": 47, "num_entries": 631, "num_filter_entries": 631, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769504545, "oldest_key_time": 1769504545, "file_creation_time": 1769504580, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 6015 microseconds, and 2799 cpu microseconds.
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.716717) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 653884 bytes OK
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.716730) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.718520) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.718533) EVENT_LOG_v1 {"time_micros": 1769504580718529, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.718548) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 656516, prev total WAL file size 656516, number of live WAL files 2.
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.718922) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(638KB)], [59(10MB)]
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504580718951, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11920840, "oldest_snapshot_seqno": -1}
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 4909 keys, 7965729 bytes, temperature: kUnknown
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504580765792, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 7965729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7933388, "index_size": 18983, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 125931, "raw_average_key_size": 25, "raw_value_size": 7844955, "raw_average_value_size": 1598, "num_data_blocks": 770, "num_entries": 4909, "num_filter_entries": 4909, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769504580, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.766399) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 7965729 bytes
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.767598) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 253.3 rd, 169.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 10.7 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(30.4) write-amplify(12.2) OK, records in: 5934, records dropped: 1025 output_compression: NoCompression
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.767630) EVENT_LOG_v1 {"time_micros": 1769504580767616, "job": 32, "event": "compaction_finished", "compaction_time_micros": 47067, "compaction_time_cpu_micros": 18188, "output_level": 6, "num_output_files": 1, "total_output_size": 7965729, "num_input_records": 5934, "num_output_records": 4909, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504580768048, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504580772204, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.718861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.772268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.772272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.772274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.772275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:03:00 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:03:00.772277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
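Reading the RocksDB burst above end to end: job 31 flushes an 818-entry memtable to L0 table #61 (653884 bytes in about 6 ms), job 32 then compacts #61 together with the existing L6 table #59 into table #62 (7965729 bytes, write-amplify 12.2), and the WAL 000057.log plus both compaction inputs are deleted. This looks like the mon's routine store.db compaction (the flush_reason is Manual Compaction), not a fault. The EVENT_LOG_v1 records are plain JSON after the marker, so they can be re-extracted from the journal; a sketch:

    import json, re

    # The payload after "EVENT_LOG_v1 " is ordinary JSON.
    EVENT = re.compile(r'EVENT_LOG_v1 (\{.*\})')
    line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1769504580718529, '
            '"job": 31, "event": "flush_finished", '
            '"output_compression": "NoCompression", '
            '"lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}')
    m = EVENT.search(line)
    if m:
        ev = json.loads(m.group(1))
        print(ev['job'], ev['event'], ev['lsm_state'])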
Jan 27 09:03:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:01.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:03:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:01.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:03:01 compute-0 ceph-mon[74357]: pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:03.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:03.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:03 compute-0 nova_compute[247671]: 2026-01-27 09:03:03.888 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:03:03 compute-0 nova_compute[247671]: 2026-01-27 09:03:03.889 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:03:04 compute-0 ceph-mon[74357]: pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:04 compute-0 nova_compute[247671]: 2026-01-27 09:03:04.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:03:04 compute-0 nova_compute[247671]: 2026-01-27 09:03:04.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:03:04 compute-0 nova_compute[247671]: 2026-01-27 09:03:04.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:03:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:05.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:05 compute-0 nova_compute[247671]: 2026-01-27 09:03:05.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:03:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:05.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:06 compute-0 ceph-mon[74357]: pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:06 compute-0 nova_compute[247671]: 2026-01-27 09:03:06.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:03:06 compute-0 nova_compute[247671]: 2026-01-27 09:03:06.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:03:06 compute-0 nova_compute[247671]: 2026-01-27 09:03:06.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:03:06 compute-0 nova_compute[247671]: 2026-01-27 09:03:06.497 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:03:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:07.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:07 compute-0 nova_compute[247671]: 2026-01-27 09:03:07.504 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:03:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:07.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:07 compute-0 sudo[265621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:03:07 compute-0 sudo[265621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:07 compute-0 sudo[265621]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:07 compute-0 sudo[265646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:03:07 compute-0 sudo[265646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:07 compute-0 sudo[265646]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:08 compute-0 ceph-mon[74357]: pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:09.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:09 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1188982578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:03:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:09.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:10 compute-0 ceph-mon[74357]: pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:10 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4101707272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:03:10 compute-0 nova_compute[247671]: 2026-01-27 09:03:10.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:03:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:11.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:11 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1405747127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:03:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:11.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:12 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:03:12.030 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:03:12 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:03:12.032 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:03:12 compute-0 ceph-mon[74357]: pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:12 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/40361642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:03:12 compute-0 nova_compute[247671]: 2026-01-27 09:03:12.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:03:12 compute-0 nova_compute[247671]: 2026-01-27 09:03:12.454 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:03:12 compute-0 nova_compute[247671]: 2026-01-27 09:03:12.454 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:03:12 compute-0 nova_compute[247671]: 2026-01-27 09:03:12.454 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:03:12 compute-0 nova_compute[247671]: 2026-01-27 09:03:12.455 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:03:12 compute-0 nova_compute[247671]: 2026-01-27 09:03:12.455 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:03:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:12 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:03:12 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981250465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:03:12 compute-0 nova_compute[247671]: 2026-01-27 09:03:12.908 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:03:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:13.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.101 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.102 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5169MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.103 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.103 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.317 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.317 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.318 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.358 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:03:13 compute-0 ceph-mon[74357]: pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:13 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1981250465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:03:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:03:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:13.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:03:13 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:03:13 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274345643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.849 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.857 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.920 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.922 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:03:13 compute-0 nova_compute[247671]: 2026-01-27 09:03:13.923 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
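The block starting at 09:03:12 is one pass of nova's update_available_resource periodic task: take the compute_resources lock, run the logged ceph df command twice to size the RBD-backed disk pool, report the unchanged inventory to placement provider 083cbb1c-f2d4-4883-a91d-8697c4453517, and release the lock after 0.820 s. A sketch of that probe, assuming the ceph CLI and the client.openstack keyring are available; the stats key names are an assumption here, not something this log shows:

    import json, subprocess

    # Command string copied verbatim from the log; the JSON keys read out
    # below are assumed, since the log does not show the command's output.
    cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
           '--conf', '/etc/ceph/ceph.conf']
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    stats = json.loads(out.stdout).get('stats', {})
    print('total bytes:', stats.get('total_bytes'),
          'avail bytes:', stats.get('total_avail_bytes'))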
Jan 27 09:03:14 compute-0 podman[265718]: 2026-01-27 09:03:14.300403809 +0000 UTC m=+0.104585208 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 09:03:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:14 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3274345643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:03:14 compute-0 nova_compute[247671]: 2026-01-27 09:03:14.924 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:03:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:15.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:03:15
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'backups', 'images', 'volumes', 'default.rgw.log']
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:03:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:03:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:03:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:15.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:03:15 compute-0 ceph-mon[74357]: pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:03:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:17.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:03:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:17.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:17 compute-0 ceph-mon[74357]: pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:18 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:03:18.033 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
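This write closes the loop opened at 09:03:12: the agent saw SB_Global.nb_cfg move from 12 to 13, announced a six-second delay, and now acknowledges by recording 'neutron:ovn-metadata-sb-cfg': '13' in its Chassis_Private row. A hypothetical sketch of that delay-then-ack pattern; the helper names are invented for illustration and are not the agent's real API:

    import time

    # Invented names; only the delay-then-ack shape mirrors the log.
    def ack_sb_cfg(nb_cfg, write_external_ids, delay=6):
        time.sleep(delay)  # "Delaying updating chassis table for 6 seconds"
        write_external_ids({'neutron:ovn-metadata-sb-cfg': str(nb_cfg)})

    ack_sb_cfg(13, lambda ids: print('would set', ids), delay=0)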
Jan 27 09:03:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:03:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:19.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:03:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:19.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:19 compute-0 ceph-mon[74357]: pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:21.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:21.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:21 compute-0 ceph-mon[74357]: pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:23.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:03:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:23.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:03:23 compute-0 ceph-mon[74357]: pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:03:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:25.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:25.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:25 compute-0 ceph-mon[74357]: pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:27.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:03:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:27.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:03:27 compute-0 sudo[265751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:03:27 compute-0 sudo[265751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:27 compute-0 sudo[265751]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:28 compute-0 sudo[265776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:03:28 compute-0 sudo[265776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:28 compute-0 sudo[265776]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:28 compute-0 ceph-mon[74357]: pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:29.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:29 compute-0 ceph-mon[74357]: pgmap v1203: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:29 compute-0 podman[265802]: 2026-01-27 09:03:29.2666235 +0000 UTC m=+0.072817100 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:03:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:29.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:31.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:31.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:31 compute-0 ceph-mon[74357]: pgmap v1204: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:33.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:33.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:33 compute-0 ceph-mon[74357]: pgmap v1205: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:35.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:35.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:35 compute-0 ceph-mon[74357]: pgmap v1206: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:37.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:37.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:37 compute-0 ceph-mon[74357]: pgmap v1207: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:03:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:39.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:03:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:39.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:39 compute-0 ceph-mon[74357]: pgmap v1208: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:41.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:41.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:42 compute-0 ceph-mon[74357]: pgmap v1209: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:43.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:03:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:43.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:03:44 compute-0 ceph-mon[74357]: pgmap v1210: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:03:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:03:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:03:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:03:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:45.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:03:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:03:45 compute-0 podman[265832]: 2026-01-27 09:03:45.266453865 +0000 UTC m=+0.084146519 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:03:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:45.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:46 compute-0 ceph-mon[74357]: pgmap v1211: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:47.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:47.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:48 compute-0 sudo[265859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:03:48 compute-0 sudo[265859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:48 compute-0 sudo[265859]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:48 compute-0 sudo[265884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:03:48 compute-0 sudo[265884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:48 compute-0 sudo[265884]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:48 compute-0 ceph-mon[74357]: pgmap v1212: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:49.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:49 compute-0 ceph-mon[74357]: pgmap v1213: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:49.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:51.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:51.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:51 compute-0 ceph-mon[74357]: pgmap v1214: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:53.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:03:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:53.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:03:53 compute-0 ceph-mon[74357]: pgmap v1215: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:03:54.245 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:03:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:03:54.245 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:03:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:03:54.245 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:03:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:55.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:55.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:55 compute-0 ceph-mon[74357]: pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:56 compute-0 sudo[265913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:03:56 compute-0 sudo[265913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:56 compute-0 sudo[265913]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:56 compute-0 sudo[265938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:03:56 compute-0 sudo[265938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:56 compute-0 sudo[265938]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:56 compute-0 sudo[265963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:03:56 compute-0 sudo[265963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:56 compute-0 sudo[265963]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:56 compute-0 sudo[265989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 27 09:03:56 compute-0 sudo[265989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:57.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:57 compute-0 podman[266087]: 2026-01-27 09:03:57.142539615 +0000 UTC m=+0.246790393 container exec b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:03:57 compute-0 podman[266087]: 2026-01-27 09:03:57.263316924 +0000 UTC m=+0.367567712 container exec_died b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:03:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 09:03:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:03:57 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 09:03:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:57.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:57 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:03:57 compute-0 ceph-mon[74357]: pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:03:57 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:03:58 compute-0 podman[266226]: 2026-01-27 09:03:58.06855544 +0000 UTC m=+0.224439972 container exec 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 09:03:58 compute-0 podman[266245]: 2026-01-27 09:03:58.180055706 +0000 UTC m=+0.095772907 container exec_died 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 09:03:58 compute-0 podman[266226]: 2026-01-27 09:03:58.205715846 +0000 UTC m=+0.361600378 container exec_died 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 09:03:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:03:58 compute-0 podman[266290]: 2026-01-27 09:03:58.618048701 +0000 UTC m=+0.219199850 container exec eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, name=keepalived, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793)
Jan 27 09:03:58 compute-0 podman[266311]: 2026-01-27 09:03:58.693997145 +0000 UTC m=+0.058609222 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, release=1793, io.openshift.tags=Ceph keepalived, distribution-scope=public)
Jan 27 09:03:58 compute-0 podman[266290]: 2026-01-27 09:03:58.752915735 +0000 UTC m=+0.354066864 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, architecture=x86_64, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, release=1793, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc.)
Jan 27 09:03:58 compute-0 sudo[265989]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:03:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:03:59.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:59 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:03:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:03:59 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:03:59 compute-0 sudo[266342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:03:59 compute-0 sudo[266342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:59 compute-0 sudo[266342]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:59 compute-0 sudo[266368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:03:59 compute-0 sudo[266368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:59 compute-0 sudo[266368]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:59 compute-0 podman[266366]: 2026-01-27 09:03:59.59147019 +0000 UTC m=+0.066810266 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 09:03:59 compute-0 sudo[266408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:03:59 compute-0 sudo[266408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:03:59 compute-0 sudo[266408]: pam_unix(sudo:session): session closed for user root
Jan 27 09:03:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:03:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:03:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:03:59.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:03:59 compute-0 sudo[266435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:03:59 compute-0 sudo[266435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:00 compute-0 sudo[266435]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:04:00 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:04:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:04:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:04:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:04:00 compute-0 ceph-mon[74357]: pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2053997084' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:04:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2053997084' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:04:00 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:04:00 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:04:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:04:00 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev bf542047-93df-4422-8b8e-21c6af5daa94 does not exist
Jan 27 09:04:00 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 9b4d7214-5ddf-49da-b190-c0fe2f93bcce does not exist
Jan 27 09:04:00 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 020eb710-2655-480a-aed5-97164270536b does not exist
Jan 27 09:04:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:04:00 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:04:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:04:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:04:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:04:00 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:04:00 compute-0 sudo[266490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:00 compute-0 sudo[266490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:00 compute-0 sudo[266490]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:00 compute-0 sudo[266515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:04:00 compute-0 sudo[266515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:00 compute-0 sudo[266515]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:00 compute-0 sudo[266541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:00 compute-0 sudo[266541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:00 compute-0 sudo[266541]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:00 compute-0 sudo[266566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:04:00 compute-0 sudo[266566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:00 compute-0 podman[266628]: 2026-01-27 09:04:00.931249198 +0000 UTC m=+0.041470503 container create 0aef1922fcf97e6dc52887fa1fd89527671e35fa8f18957c105a165fff854980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yonath, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:04:00 compute-0 systemd[1]: Started libpod-conmon-0aef1922fcf97e6dc52887fa1fd89527671e35fa8f18957c105a165fff854980.scope.
Jan 27 09:04:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:04:01 compute-0 podman[266628]: 2026-01-27 09:04:00.910784049 +0000 UTC m=+0.021005394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:04:01 compute-0 podman[266628]: 2026-01-27 09:04:01.010704619 +0000 UTC m=+0.120925944 container init 0aef1922fcf97e6dc52887fa1fd89527671e35fa8f18957c105a165fff854980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:04:01 compute-0 podman[266628]: 2026-01-27 09:04:01.016950349 +0000 UTC m=+0.127171654 container start 0aef1922fcf97e6dc52887fa1fd89527671e35fa8f18957c105a165fff854980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:04:01 compute-0 podman[266628]: 2026-01-27 09:04:01.021158264 +0000 UTC m=+0.131379589 container attach 0aef1922fcf97e6dc52887fa1fd89527671e35fa8f18957c105a165fff854980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yonath, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 09:04:01 compute-0 nostalgic_yonath[266644]: 167 167
Jan 27 09:04:01 compute-0 systemd[1]: libpod-0aef1922fcf97e6dc52887fa1fd89527671e35fa8f18957c105a165fff854980.scope: Deactivated successfully.
Jan 27 09:04:01 compute-0 podman[266628]: 2026-01-27 09:04:01.023639801 +0000 UTC m=+0.133861106 container died 0aef1922fcf97e6dc52887fa1fd89527671e35fa8f18957c105a165fff854980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 27 09:04:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c8532c4ac1ded681cfc7d60881cab5ceb8b71ee527073e9e1b7fd46b81d57af-merged.mount: Deactivated successfully.
Jan 27 09:04:01 compute-0 podman[266628]: 2026-01-27 09:04:01.058276158 +0000 UTC m=+0.168497463 container remove 0aef1922fcf97e6dc52887fa1fd89527671e35fa8f18957c105a165fff854980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yonath, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:04:01 compute-0 systemd[1]: libpod-conmon-0aef1922fcf97e6dc52887fa1fd89527671e35fa8f18957c105a165fff854980.scope: Deactivated successfully.
Jan 27 09:04:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:01.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:01 compute-0 podman[266666]: 2026-01-27 09:04:01.233835043 +0000 UTC m=+0.034923334 container create 916517e52ea92a5787223a6331c30e39d15aca76e352abc10f3595940e19b39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pascal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 09:04:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:04:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:04:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:04:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:04:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:04:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:04:01 compute-0 ceph-mon[74357]: pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:01 compute-0 systemd[1]: Started libpod-conmon-916517e52ea92a5787223a6331c30e39d15aca76e352abc10f3595940e19b39f.scope.
Jan 27 09:04:01 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:04:01 compute-0 podman[266666]: 2026-01-27 09:04:01.219992345 +0000 UTC m=+0.021080656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ee4cf031e54341535346fa40c787f851512116fc88a6fee4624e67b035f7f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ee4cf031e54341535346fa40c787f851512116fc88a6fee4624e67b035f7f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ee4cf031e54341535346fa40c787f851512116fc88a6fee4624e67b035f7f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ee4cf031e54341535346fa40c787f851512116fc88a6fee4624e67b035f7f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ee4cf031e54341535346fa40c787f851512116fc88a6fee4624e67b035f7f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:01 compute-0 podman[266666]: 2026-01-27 09:04:01.328064078 +0000 UTC m=+0.129152379 container init 916517e52ea92a5787223a6331c30e39d15aca76e352abc10f3595940e19b39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 27 09:04:01 compute-0 podman[266666]: 2026-01-27 09:04:01.340785626 +0000 UTC m=+0.141873907 container start 916517e52ea92a5787223a6331c30e39d15aca76e352abc10f3595940e19b39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 09:04:01 compute-0 podman[266666]: 2026-01-27 09:04:01.343921341 +0000 UTC m=+0.145009702 container attach 916517e52ea92a5787223a6331c30e39d15aca76e352abc10f3595940e19b39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:04:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:01.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:02 compute-0 admiring_pascal[266683]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:04:02 compute-0 admiring_pascal[266683]: --> relative data size: 1.0
Jan 27 09:04:02 compute-0 admiring_pascal[266683]: --> All data devices are unavailable
Jan 27 09:04:02 compute-0 systemd[1]: libpod-916517e52ea92a5787223a6331c30e39d15aca76e352abc10f3595940e19b39f.scope: Deactivated successfully.
Jan 27 09:04:02 compute-0 conmon[266683]: conmon 916517e52ea92a578722 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-916517e52ea92a5787223a6331c30e39d15aca76e352abc10f3595940e19b39f.scope/container/memory.events
Jan 27 09:04:02 compute-0 podman[266666]: 2026-01-27 09:04:02.176988907 +0000 UTC m=+0.978077208 container died 916517e52ea92a5787223a6331c30e39d15aca76e352abc10f3595940e19b39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:04:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-95ee4cf031e54341535346fa40c787f851512116fc88a6fee4624e67b035f7f4-merged.mount: Deactivated successfully.
Jan 27 09:04:02 compute-0 podman[266666]: 2026-01-27 09:04:02.23200822 +0000 UTC m=+1.033096501 container remove 916517e52ea92a5787223a6331c30e39d15aca76e352abc10f3595940e19b39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pascal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:04:02 compute-0 systemd[1]: libpod-conmon-916517e52ea92a5787223a6331c30e39d15aca76e352abc10f3595940e19b39f.scope: Deactivated successfully.
Jan 27 09:04:02 compute-0 sudo[266566]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:02 compute-0 sudo[266712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:02 compute-0 sudo[266712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:02 compute-0 sudo[266712]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:02 compute-0 sudo[266737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:04:02 compute-0 sudo[266737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:02 compute-0 sudo[266737]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:02 compute-0 nova_compute[247671]: 2026-01-27 09:04:02.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:02 compute-0 nova_compute[247671]: 2026-01-27 09:04:02.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:04:02 compute-0 nova_compute[247671]: 2026-01-27 09:04:02.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:02 compute-0 sudo[266762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:02 compute-0 nova_compute[247671]: 2026-01-27 09:04:02.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 09:04:02 compute-0 sudo[266762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:02 compute-0 sudo[266762]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:02 compute-0 nova_compute[247671]: 2026-01-27 09:04:02.447 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 09:04:02 compute-0 sudo[266788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:04:02 compute-0 sudo[266788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:02 compute-0 podman[266852]: 2026-01-27 09:04:02.798042672 +0000 UTC m=+0.042175663 container create ee6cdbbbcf442fac98dc49648fd622503676023e8938f5aef9d0c92eacd95f2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:04:02 compute-0 systemd[1]: Started libpod-conmon-ee6cdbbbcf442fac98dc49648fd622503676023e8938f5aef9d0c92eacd95f2b.scope.
Jan 27 09:04:02 compute-0 podman[266852]: 2026-01-27 09:04:02.780235525 +0000 UTC m=+0.024368546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:04:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:04:02 compute-0 podman[266852]: 2026-01-27 09:04:02.908141099 +0000 UTC m=+0.152274130 container init ee6cdbbbcf442fac98dc49648fd622503676023e8938f5aef9d0c92eacd95f2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_diffie, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:04:02 compute-0 podman[266852]: 2026-01-27 09:04:02.916299052 +0000 UTC m=+0.160432053 container start ee6cdbbbcf442fac98dc49648fd622503676023e8938f5aef9d0c92eacd95f2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_diffie, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:04:02 compute-0 podman[266852]: 2026-01-27 09:04:02.91989915 +0000 UTC m=+0.164032301 container attach ee6cdbbbcf442fac98dc49648fd622503676023e8938f5aef9d0c92eacd95f2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:04:02 compute-0 boring_diffie[266869]: 167 167
Jan 27 09:04:02 compute-0 systemd[1]: libpod-ee6cdbbbcf442fac98dc49648fd622503676023e8938f5aef9d0c92eacd95f2b.scope: Deactivated successfully.
Jan 27 09:04:02 compute-0 podman[266852]: 2026-01-27 09:04:02.922152852 +0000 UTC m=+0.166286313 container died ee6cdbbbcf442fac98dc49648fd622503676023e8938f5aef9d0c92eacd95f2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_diffie, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:04:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-966fb685a12cca1602b6325246963b5b75f9e452d2eb21439eaaf674f7e5d12f-merged.mount: Deactivated successfully.
Jan 27 09:04:02 compute-0 podman[266852]: 2026-01-27 09:04:02.957359714 +0000 UTC m=+0.201492695 container remove ee6cdbbbcf442fac98dc49648fd622503676023e8938f5aef9d0c92eacd95f2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 09:04:02 compute-0 systemd[1]: libpod-conmon-ee6cdbbbcf442fac98dc49648fd622503676023e8938f5aef9d0c92eacd95f2b.scope: Deactivated successfully.
Jan 27 09:04:03 compute-0 podman[266891]: 2026-01-27 09:04:03.098868489 +0000 UTC m=+0.038612066 container create 365b097bf46673b7953cee0dc67a80d53eb8c142bb1b6b3e6b3a3bd2e34867c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dewdney, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:04:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:03.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:03 compute-0 systemd[1]: Started libpod-conmon-365b097bf46673b7953cee0dc67a80d53eb8c142bb1b6b3e6b3a3bd2e34867c6.scope.
Jan 27 09:04:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e4b245370d07a700f46530c7fd36dfb539e733d2a8fab819740cfd2d2701b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e4b245370d07a700f46530c7fd36dfb539e733d2a8fab819740cfd2d2701b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e4b245370d07a700f46530c7fd36dfb539e733d2a8fab819740cfd2d2701b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e4b245370d07a700f46530c7fd36dfb539e733d2a8fab819740cfd2d2701b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:03 compute-0 podman[266891]: 2026-01-27 09:04:03.166515037 +0000 UTC m=+0.106258634 container init 365b097bf46673b7953cee0dc67a80d53eb8c142bb1b6b3e6b3a3bd2e34867c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dewdney, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 09:04:03 compute-0 podman[266891]: 2026-01-27 09:04:03.174603798 +0000 UTC m=+0.114347375 container start 365b097bf46673b7953cee0dc67a80d53eb8c142bb1b6b3e6b3a3bd2e34867c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:04:03 compute-0 podman[266891]: 2026-01-27 09:04:03.082376728 +0000 UTC m=+0.022120325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:04:03 compute-0 podman[266891]: 2026-01-27 09:04:03.178476454 +0000 UTC m=+0.118220051 container attach 365b097bf46673b7953cee0dc67a80d53eb8c142bb1b6b3e6b3a3bd2e34867c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:04:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:03.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:04:03 compute-0 ceph-mon[74357]: pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]: {
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:     "0": [
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:         {
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             "devices": [
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "/dev/loop3"
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             ],
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             "lv_name": "ceph_lv0",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             "lv_size": "7511998464",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             "name": "ceph_lv0",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             "tags": {
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "ceph.cluster_name": "ceph",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "ceph.crush_device_class": "",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "ceph.encrypted": "0",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "ceph.osd_id": "0",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "ceph.type": "block",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:                 "ceph.vdo": "0"
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             },
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             "type": "block",
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:             "vg_name": "ceph_vg0"
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:         }
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]:     ]
Jan 27 09:04:03 compute-0 quizzical_dewdney[266908]: }
Jan 27 09:04:04 compute-0 systemd[1]: libpod-365b097bf46673b7953cee0dc67a80d53eb8c142bb1b6b3e6b3a3bd2e34867c6.scope: Deactivated successfully.
Jan 27 09:04:04 compute-0 podman[266891]: 2026-01-27 09:04:04.001777432 +0000 UTC m=+0.941521009 container died 365b097bf46673b7953cee0dc67a80d53eb8c142bb1b6b3e6b3a3bd2e34867c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dewdney, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 27 09:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-86e4b245370d07a700f46530c7fd36dfb539e733d2a8fab819740cfd2d2701b8-merged.mount: Deactivated successfully.
Jan 27 09:04:04 compute-0 podman[266891]: 2026-01-27 09:04:04.051583213 +0000 UTC m=+0.991326790 container remove 365b097bf46673b7953cee0dc67a80d53eb8c142bb1b6b3e6b3a3bd2e34867c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 27 09:04:04 compute-0 systemd[1]: libpod-conmon-365b097bf46673b7953cee0dc67a80d53eb8c142bb1b6b3e6b3a3bd2e34867c6.scope: Deactivated successfully.
Jan 27 09:04:04 compute-0 sudo[266788]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:04 compute-0 sudo[266930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:04 compute-0 sudo[266930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:04 compute-0 sudo[266930]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:04 compute-0 sudo[266955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:04:04 compute-0 sudo[266955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:04 compute-0 sudo[266955]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:04 compute-0 sudo[266980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:04 compute-0 sudo[266980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:04 compute-0 sudo[266980]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:04 compute-0 sudo[267005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:04:04 compute-0 sudo[267005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:04 compute-0 nova_compute[247671]: 2026-01-27 09:04:04.448 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:04 compute-0 nova_compute[247671]: 2026-01-27 09:04:04.449 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:04 compute-0 podman[267072]: 2026-01-27 09:04:04.634536547 +0000 UTC m=+0.045149324 container create ff10182fe5e11a39485307082ab8bcb6b386c97fe106b7b673cde9b995111a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:04:04 compute-0 systemd[1]: Started libpod-conmon-ff10182fe5e11a39485307082ab8bcb6b386c97fe106b7b673cde9b995111a2a.scope.
Jan 27 09:04:04 compute-0 podman[267072]: 2026-01-27 09:04:04.61630892 +0000 UTC m=+0.026921717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:04:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:04:04 compute-0 podman[267072]: 2026-01-27 09:04:04.725892593 +0000 UTC m=+0.136505400 container init ff10182fe5e11a39485307082ab8bcb6b386c97fe106b7b673cde9b995111a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 27 09:04:04 compute-0 podman[267072]: 2026-01-27 09:04:04.732131013 +0000 UTC m=+0.142743790 container start ff10182fe5e11a39485307082ab8bcb6b386c97fe106b7b673cde9b995111a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 27 09:04:04 compute-0 podman[267072]: 2026-01-27 09:04:04.735176076 +0000 UTC m=+0.145788893 container attach ff10182fe5e11a39485307082ab8bcb6b386c97fe106b7b673cde9b995111a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 27 09:04:04 compute-0 infallible_lederberg[267088]: 167 167
Jan 27 09:04:04 compute-0 systemd[1]: libpod-ff10182fe5e11a39485307082ab8bcb6b386c97fe106b7b673cde9b995111a2a.scope: Deactivated successfully.
Jan 27 09:04:04 compute-0 podman[267072]: 2026-01-27 09:04:04.737477789 +0000 UTC m=+0.148090566 container died ff10182fe5e11a39485307082ab8bcb6b386c97fe106b7b673cde9b995111a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 09:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5566518626aa3545d9f31e84e0544771f6a855ed2552adb3d3fcb990b033a841-merged.mount: Deactivated successfully.
Jan 27 09:04:04 compute-0 podman[267072]: 2026-01-27 09:04:04.780141445 +0000 UTC m=+0.190754222 container remove ff10182fe5e11a39485307082ab8bcb6b386c97fe106b7b673cde9b995111a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 27 09:04:04 compute-0 systemd[1]: libpod-conmon-ff10182fe5e11a39485307082ab8bcb6b386c97fe106b7b673cde9b995111a2a.scope: Deactivated successfully.
Jan 27 09:04:04 compute-0 podman[267110]: 2026-01-27 09:04:04.932118717 +0000 UTC m=+0.039067239 container create 79419a5714f705268370fe7201765a86e3f0d19f7611ac371a5e73d71c77e4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kilby, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 09:04:04 compute-0 systemd[1]: Started libpod-conmon-79419a5714f705268370fe7201765a86e3f0d19f7611ac371a5e73d71c77e4f9.scope.
Jan 27 09:04:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ff9b16d803e094f563e54554dc7884c771dbae71ab20e679278a51d48d3d9a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ff9b16d803e094f563e54554dc7884c771dbae71ab20e679278a51d48d3d9a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ff9b16d803e094f563e54554dc7884c771dbae71ab20e679278a51d48d3d9a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ff9b16d803e094f563e54554dc7884c771dbae71ab20e679278a51d48d3d9a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:05 compute-0 podman[267110]: 2026-01-27 09:04:05.007875766 +0000 UTC m=+0.114824328 container init 79419a5714f705268370fe7201765a86e3f0d19f7611ac371a5e73d71c77e4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 09:04:05 compute-0 podman[267110]: 2026-01-27 09:04:04.915286807 +0000 UTC m=+0.022235359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:04:05 compute-0 podman[267110]: 2026-01-27 09:04:05.015439022 +0000 UTC m=+0.122387554 container start 79419a5714f705268370fe7201765a86e3f0d19f7611ac371a5e73d71c77e4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:04:05 compute-0 podman[267110]: 2026-01-27 09:04:05.018863556 +0000 UTC m=+0.125812088 container attach 79419a5714f705268370fe7201765a86e3f0d19f7611ac371a5e73d71c77e4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kilby, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:04:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:05.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:05 compute-0 nova_compute[247671]: 2026-01-27 09:04:05.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:05.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:05 compute-0 ceph-mon[74357]: pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:05 compute-0 distracted_kilby[267126]: {
Jan 27 09:04:05 compute-0 distracted_kilby[267126]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:04:05 compute-0 distracted_kilby[267126]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:04:05 compute-0 distracted_kilby[267126]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:04:05 compute-0 distracted_kilby[267126]:         "osd_id": 0,
Jan 27 09:04:05 compute-0 distracted_kilby[267126]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:04:05 compute-0 distracted_kilby[267126]:         "type": "bluestore"
Jan 27 09:04:05 compute-0 distracted_kilby[267126]:     }
Jan 27 09:04:05 compute-0 distracted_kilby[267126]: }
Jan 27 09:04:05 compute-0 systemd[1]: libpod-79419a5714f705268370fe7201765a86e3f0d19f7611ac371a5e73d71c77e4f9.scope: Deactivated successfully.
Jan 27 09:04:05 compute-0 podman[267110]: 2026-01-27 09:04:05.903453919 +0000 UTC m=+1.010402461 container died 79419a5714f705268370fe7201765a86e3f0d19f7611ac371a5e73d71c77e4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kilby, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 09:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ff9b16d803e094f563e54554dc7884c771dbae71ab20e679278a51d48d3d9a5-merged.mount: Deactivated successfully.
Jan 27 09:04:06 compute-0 nova_compute[247671]: 2026-01-27 09:04:06.425 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:06 compute-0 nova_compute[247671]: 2026-01-27 09:04:06.425 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:04:06 compute-0 nova_compute[247671]: 2026-01-27 09:04:06.425 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:04:06 compute-0 nova_compute[247671]: 2026-01-27 09:04:06.481 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:04:06 compute-0 podman[267110]: 2026-01-27 09:04:06.50238498 +0000 UTC m=+1.609333512 container remove 79419a5714f705268370fe7201765a86e3f0d19f7611ac371a5e73d71c77e4f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:04:06 compute-0 sudo[267005]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:04:06 compute-0 systemd[1]: libpod-conmon-79419a5714f705268370fe7201765a86e3f0d19f7611ac371a5e73d71c77e4f9.scope: Deactivated successfully.
Jan 27 09:04:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:06 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:04:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:04:06 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:04:06 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 40527fc7-5728-4054-a70a-26673a7250f8 does not exist
Jan 27 09:04:06 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev bb07ed22-c8f0-4fe1-ba53-fe49d49dc89d does not exist
Jan 27 09:04:06 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 0cbc2d97-47a2-4296-9a8a-4a185ce07ea5 does not exist
Jan 27 09:04:06 compute-0 sudo[267162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:06 compute-0 sudo[267162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:06 compute-0 sudo[267162]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:06 compute-0 sudo[267187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:04:06 compute-0 sudo[267187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:06 compute-0 sudo[267187]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:07.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:07 compute-0 nova_compute[247671]: 2026-01-27 09:04:07.474 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:07.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
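
[editor's note] Each radosgw request shows up as a starting/done pair plus one beast access line carrying client, user, timestamp, request line, status, byte count, and latency. A small parser for that line; the regex is inferred from the sample lines above, not from an official format specification.

    import re

    # Field layout inferred from the beast access lines in this log.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous '
            '[27/Jan/2026:09:04:07.640 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000027s')

    m = BEAST_RE.search(line)
    if m:
        print(m.group('client'), m.group('request'),
              m.group('status'), m.group('latency'))
        # -> 192.168.122.102 HEAD / HTTP/1.0 200 0.001000027
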
Jan 27 09:04:07 compute-0 ceph-mon[74357]: pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:04:07 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:04:08 compute-0 sudo[267212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:08 compute-0 sudo[267212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:08 compute-0 sudo[267212]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:08 compute-0 sudo[267237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:08 compute-0 sudo[267237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:08 compute-0 sudo[267237]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:09.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:09 compute-0 nova_compute[247671]: 2026-01-27 09:04:09.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:09.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:09 compute-0 ceph-mon[74357]: pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:09 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4004914056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:04:10 compute-0 nova_compute[247671]: 2026-01-27 09:04:10.508 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:10 compute-0 nova_compute[247671]: 2026-01-27 09:04:10.508 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 09:04:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:10 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1068927977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:04:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:11.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:04:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:11.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:04:11 compute-0 ceph-mon[74357]: pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:12 compute-0 nova_compute[247671]: 2026-01-27 09:04:12.500 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:12 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1417488791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:04:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:13.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:13.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:04:13 compute-0 ceph-mon[74357]: pgmap v1225: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:13 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1210277550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:04:14 compute-0 nova_compute[247671]: 2026-01-27 09:04:14.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:14 compute-0 nova_compute[247671]: 2026-01-27 09:04:14.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:14 compute-0 nova_compute[247671]: 2026-01-27 09:04:14.462 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:04:14 compute-0 nova_compute[247671]: 2026-01-27 09:04:14.462 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:04:14 compute-0 nova_compute[247671]: 2026-01-27 09:04:14.463 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
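
[editor's note] The Acquiring/acquired/released triple around "compute_resources" is oslo.concurrency's lockutils debug output (lockutils.py:404, 409 and 423 in the paths above). A stdlib-only imitation of that pattern, intended as a reading aid for the log rather than a stand-in for the real library:

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def lock(name, caller):
        """Print the same Acquiring / acquired (waited) / released (held)
        triple that oslo_concurrency.lockutils logs around a lock."""
        lk = _locks.setdefault(name, threading.Lock())
        print(f'Acquiring lock "{name}" by "{caller}"')
        t0 = time.monotonic()
        with lk:
            print(f'Lock "{name}" acquired by "{caller}" '
                  f':: waited {time.monotonic() - t0:.3f}s')
            t1 = time.monotonic()
            try:
                yield
            finally:
                print(f'Lock "{name}" "released" by "{caller}" '
                      f':: held {time.monotonic() - t1:.3f}s')

    with lock("compute_resources", "ResourceTracker.clean_compute_node_cache"):
        pass  # work done while serialized on the lock
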
Jan 27 09:04:14 compute-0 nova_compute[247671]: 2026-01-27 09:04:14.463 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:04:14 compute-0 nova_compute[247671]: 2026-01-27 09:04:14.463 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:04:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:04:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1100367815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:04:14 compute-0 nova_compute[247671]: 2026-01-27 09:04:14.897 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
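
[editor's note] To audit disk capacity the resource tracker shells out to the exact command logged above: ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf. A hedged sketch of that call; the command is verbatim from the log, while the JSON key names assume the usual ceph df layout and running it requires a reachable cluster with valid client.openstack credentials.

    import json
    import subprocess

    # Same command the resource tracker runs in the DEBUG lines above.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    out = subprocess.run(cmd, check=True, capture_output=True, text=True)
    stats = json.loads(out.stdout)

    # Top-level "stats" holds cluster totals; "pools" holds per-pool
    # usage (key names per the common ceph df JSON output).
    total = stats["stats"]["total_bytes"]
    avail = stats["stats"]["total_avail_bytes"]
    print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
    for pool in stats["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])
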
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:04:15 compute-0 nova_compute[247671]: 2026-01-27 09:04:15.057 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:04:15 compute-0 nova_compute[247671]: 2026-01-27 09:04:15.059 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:04:15 compute-0 nova_compute[247671]: 2026-01-27 09:04:15.059 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:04:15 compute-0 nova_compute[247671]: 2026-01-27 09:04:15.060 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:04:15
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.log', 'backups', 'volumes']
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:04:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:15.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:04:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:04:15 compute-0 nova_compute[247671]: 2026-01-27 09:04:15.617 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:04:15 compute-0 nova_compute[247671]: 2026-01-27 09:04:15.617 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:04:15 compute-0 nova_compute[247671]: 2026-01-27 09:04:15.617 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:04:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:04:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:15.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:04:15 compute-0 nova_compute[247671]: 2026-01-27 09:04:15.654 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:04:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:04:16 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3355653043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:04:16 compute-0 nova_compute[247671]: 2026-01-27 09:04:16.065 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:04:16 compute-0 nova_compute[247671]: 2026-01-27 09:04:16.070 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:04:16 compute-0 nova_compute[247671]: 2026-01-27 09:04:16.101 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:04:16 compute-0 nova_compute[247671]: 2026-01-27 09:04:16.105 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:04:16 compute-0 nova_compute[247671]: 2026-01-27 09:04:16.105 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
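
[editor's note] The inventory dict logged at 09:04:16.101 is what Placement schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. A quick check with the logged values:

    # Schedulable capacity from the inventory dict in the log:
    #   capacity = (total - reserved) * allocation_ratio
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 18.0
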
Jan 27 09:04:16 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:04:16.181 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:04:16 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:04:16.182 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:04:16 compute-0 ceph-mon[74357]: pgmap v1226: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:16 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1100367815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:04:16 compute-0 podman[267310]: 2026-01-27 09:04:16.298753285 +0000 UTC m=+0.116721732 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 09:04:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 27 09:04:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:17.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:17 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3355653043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:04:17 compute-0 ceph-mon[74357]: pgmap v1227: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 27 09:04:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:04:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:17.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:04:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 27 09:04:19 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 27 09:04:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:19.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:19.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:20 compute-0 ceph-mon[74357]: pgmap v1228: 305 pgs: 305 active+clean; 41 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 27 09:04:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 70 MiB data, 218 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 977 KiB/s wr, 27 op/s
Jan 27 09:04:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:21.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:21 compute-0 ceph-mon[74357]: pgmap v1229: 305 pgs: 305 active+clean; 70 MiB data, 218 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 977 KiB/s wr, 27 op/s
Jan 27 09:04:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:21.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 245 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:04:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:23.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:23 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:04:23.184 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:04:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:23.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:04:23 compute-0 ceph-mon[74357]: pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 245 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
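
[editor's note] Each pg_autoscaler line reports a pool's share of raw capacity, its bias, a fractional PG target, and the power-of-two value it quantizes to. The targets above are consistent with target = usage_ratio * bias * 300 for this cluster, with a per-pool floor; a worked check follows, where the 300 PG budget and the floors are inferred from the log values rather than queried from Ceph.

    # Reproduce the pg_autoscaler arithmetic visible in the log.
    # Assumption: the PG budget here is 300 (e.g. 3 OSDs times
    # mon_target_pg_per_osd=100); floors stand in for pg_num_min.
    BUDGET = 300

    def quantize(target, floor):
        """Raise the raw target to the pool floor, then round up to the
        next power of two (a sketch of the autoscaler's quantization)."""
        n = max(target, floor)
        p = 1
        while p < n:
            p *= 2
        return p

    pools = [
        # (name, usage_ratio from the log, bias, floor)
        (".mgr",               2.0538165363856318e-05, 1.0, 1),
        ("volumes",            0.000988921749953471,   1.0, 32),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0, 16),
    ]

    for name, ratio, bias, floor in pools:
        raw = ratio * bias * BUDGET
        print(f"{name}: pg target {raw:.6f} quantized to {quantize(raw, floor)}")
    # Matches the log: 0.006161 -> 1, 0.296677 -> 32, 0.001745 -> 16
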
Jan 27 09:04:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 88 MiB data, 245 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:04:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:25.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:25.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:04:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:26 compute-0 ceph-mon[74357]: pgmap v1231: 305 pgs: 305 active+clean; 88 MiB data, 245 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:04:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 27 09:04:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:27.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:27 compute-0 ceph-mon[74357]: pgmap v1232: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 27 09:04:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:27.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:28 compute-0 sudo[267343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:28 compute-0 sudo[267343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:28 compute-0 sudo[267343]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:28 compute-0 sudo[267369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:28 compute-0 sudo[267369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:28 compute-0 sudo[267369]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 27 09:04:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:29.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:29.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:04:29 compute-0 ceph-mon[74357]: pgmap v1233: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 27 09:04:30 compute-0 podman[267394]: 2026-01-27 09:04:30.242460351 +0000 UTC m=+0.055390286 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:04:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 27 09:04:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:31.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:04:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:31 compute-0 ceph-mon[74357]: pgmap v1234: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 27 09:04:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 8.4 KiB/s rd, 839 KiB/s wr, 15 op/s
Jan 27 09:04:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:33.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:33 compute-0 ceph-mon[74357]: pgmap v1235: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 8.4 KiB/s rd, 839 KiB/s wr, 15 op/s
Jan 27 09:04:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 682 B/s wr, 2 op/s
Jan 27 09:04:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:35.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:35.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:35 compute-0 ceph-mon[74357]: pgmap v1236: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 682 B/s wr, 2 op/s
Jan 27 09:04:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 682 B/s wr, 2 op/s
Jan 27 09:04:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:37.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:04:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:37.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:04:38 compute-0 ceph-mon[74357]: pgmap v1237: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 682 B/s wr, 2 op/s
Jan 27 09:04:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 27 09:04:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:39.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:39.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:40 compute-0 ceph-mon[74357]: pgmap v1238: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 27 09:04:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 27 09:04:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:41.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:04:41 compute-0 ceph-mon[74357]: pgmap v1239: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 27 09:04:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:41.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:43.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:43.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:44 compute-0 ceph-mon[74357]: pgmap v1240: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:04:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:04:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:04:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:04:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:04:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:04:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:04:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:45.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:04:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:45.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:04:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:46 compute-0 ceph-mon[74357]: pgmap v1241: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:47.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:47 compute-0 podman[267422]: 2026-01-27 09:04:47.268626379 +0000 UTC m=+0.081023203 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 09:04:47 compute-0 ceph-mon[74357]: pgmap v1242: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:47.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:04:48 compute-0 sudo[267450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:48 compute-0 sudo[267450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:48 compute-0 sudo[267450]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:48 compute-0 sudo[267475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:04:48 compute-0 sudo[267475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:04:48 compute-0 sudo[267475]: pam_unix(sudo:session): session closed for user root
Jan 27 09:04:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:49.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:49.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:50 compute-0 ceph-mon[74357]: pgmap v1243: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:51.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:51 compute-0 ceph-mon[74357]: pgmap v1244: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:04:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:04:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:53.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:53.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:53 compute-0 ceph-mon[74357]: pgmap v1245: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:04:54.245 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:04:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:04:54.246 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:04:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:04:54.246 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:04:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:55.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:55 compute-0 ceph-mon[74357]: pgmap v1246: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:57.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:57.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:57 compute-0 ceph-mon[74357]: pgmap v1247: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:04:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:04:59.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:04:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:04:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:04:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:04:59.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:00 compute-0 ceph-mon[74357]: pgmap v1248: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2278437786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:05:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2278437786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:05:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:00 compute-0 podman[267506]: 2026-01-27 09:05:00.988854043 +0000 UTC m=+0.054489253 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 09:05:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:01.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:01.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:02 compute-0 ceph-mon[74357]: pgmap v1249: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:03.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:03.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:04 compute-0 ceph-mon[74357]: pgmap v1250: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:05 compute-0 nova_compute[247671]: 2026-01-27 09:05:05.105 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:05:05 compute-0 nova_compute[247671]: 2026-01-27 09:05:05.105 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:05:05 compute-0 nova_compute[247671]: 2026-01-27 09:05:05.106 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:05:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:05.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:05 compute-0 ceph-mon[74357]: pgmap v1251: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:05 compute-0 nova_compute[247671]: 2026-01-27 09:05:05.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:05:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:05.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:06 compute-0 nova_compute[247671]: 2026-01-27 09:05:06.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:05:06 compute-0 nova_compute[247671]: 2026-01-27 09:05:06.437 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:05:06 compute-0 nova_compute[247671]: 2026-01-27 09:05:06.437 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:05:06 compute-0 nova_compute[247671]: 2026-01-27 09:05:06.438 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:05:06 compute-0 nova_compute[247671]: 2026-01-27 09:05:06.454 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:05:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:05:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:07.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:05:07 compute-0 sudo[267530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:07 compute-0 sudo[267530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:07 compute-0 sudo[267530]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:07 compute-0 sudo[267555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:05:07 compute-0 sudo[267555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:07 compute-0 sudo[267555]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:07 compute-0 sudo[267580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:07 compute-0 sudo[267580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:07 compute-0 sudo[267580]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:07 compute-0 sudo[267605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:05:07 compute-0 sudo[267605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:07 compute-0 nova_compute[247671]: 2026-01-27 09:05:07.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:05:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:07.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:07 compute-0 sudo[267605]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:07 compute-0 ceph-mon[74357]: pgmap v1252: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:05:07 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:05:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:05:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:05:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:05:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:05:08 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c752a426-7be9-417e-ac75-67db51a1a410 does not exist
Jan 27 09:05:08 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 8ad6a7bf-e4b5-42d4-982f-9d8ffbdee39c does not exist
Jan 27 09:05:08 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d337dc97-280c-49a6-b4bb-4a47bc119d0c does not exist
Jan 27 09:05:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:05:08 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:05:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:05:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:05:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:05:08 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:05:08 compute-0 sudo[267663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:08 compute-0 sudo[267663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:08 compute-0 sudo[267663]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:08 compute-0 sudo[267688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:05:08 compute-0 sudo[267688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:08 compute-0 sudo[267688]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:08 compute-0 sudo[267713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:08 compute-0 sudo[267713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:08 compute-0 sudo[267713]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:08 compute-0 sudo[267738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:05:08 compute-0 sudo[267738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:08 compute-0 nova_compute[247671]: 2026-01-27 09:05:08.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:05:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:08 compute-0 podman[267805]: 2026-01-27 09:05:08.539975267 +0000 UTC m=+0.022568854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:05:08 compute-0 podman[267805]: 2026-01-27 09:05:08.659245378 +0000 UTC m=+0.141838935 container create b4ea9e7ba390eb7c095c553b77a211f6e6efeb8eff10e685ca5678b16ff869b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:05:08 compute-0 systemd[1]: Started libpod-conmon-b4ea9e7ba390eb7c095c553b77a211f6e6efeb8eff10e685ca5678b16ff869b4.scope.
Jan 27 09:05:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:05:08 compute-0 sudo[267821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:08 compute-0 sudo[267821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:08 compute-0 podman[267805]: 2026-01-27 09:05:08.753243832 +0000 UTC m=+0.235837409 container init b4ea9e7ba390eb7c095c553b77a211f6e6efeb8eff10e685ca5678b16ff869b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:05:08 compute-0 sudo[267821]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:08 compute-0 podman[267805]: 2026-01-27 09:05:08.760020236 +0000 UTC m=+0.242613793 container start b4ea9e7ba390eb7c095c553b77a211f6e6efeb8eff10e685ca5678b16ff869b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Jan 27 09:05:08 compute-0 determined_booth[267822]: 167 167
Jan 27 09:05:08 compute-0 systemd[1]: libpod-b4ea9e7ba390eb7c095c553b77a211f6e6efeb8eff10e685ca5678b16ff869b4.scope: Deactivated successfully.
Jan 27 09:05:08 compute-0 podman[267805]: 2026-01-27 09:05:08.7782188 +0000 UTC m=+0.260812357 container attach b4ea9e7ba390eb7c095c553b77a211f6e6efeb8eff10e685ca5678b16ff869b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:05:08 compute-0 podman[267805]: 2026-01-27 09:05:08.779655489 +0000 UTC m=+0.262249056 container died b4ea9e7ba390eb7c095c553b77a211f6e6efeb8eff10e685ca5678b16ff869b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 27 09:05:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e89368e430942dcc42a7710d9e3ddb334b72c92101cf9812071819087443732-merged.mount: Deactivated successfully.
Jan 27 09:05:08 compute-0 sudo[267851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:08 compute-0 sudo[267851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:08 compute-0 sudo[267851]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:08 compute-0 podman[267805]: 2026-01-27 09:05:08.868429351 +0000 UTC m=+0.351022908 container remove b4ea9e7ba390eb7c095c553b77a211f6e6efeb8eff10e685ca5678b16ff869b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 09:05:08 compute-0 systemd[1]: libpod-conmon-b4ea9e7ba390eb7c095c553b77a211f6e6efeb8eff10e685ca5678b16ff869b4.scope: Deactivated successfully.
Jan 27 09:05:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:05:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:05:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:05:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:05:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:05:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:05:09 compute-0 podman[267896]: 2026-01-27 09:05:09.034854422 +0000 UTC m=+0.047395718 container create e47c1197a73c39ade4335995468c1cf23130d96c89073a9a051db8c3c1a702bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:05:09 compute-0 systemd[1]: Started libpod-conmon-e47c1197a73c39ade4335995468c1cf23130d96c89073a9a051db8c3c1a702bb.scope.
Jan 27 09:05:09 compute-0 podman[267896]: 2026-01-27 09:05:09.012613098 +0000 UTC m=+0.025154414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:05:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a814850ce00fd2c855e86f2cd33855388f87be7968f70a152e177f0f5afaae9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a814850ce00fd2c855e86f2cd33855388f87be7968f70a152e177f0f5afaae9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a814850ce00fd2c855e86f2cd33855388f87be7968f70a152e177f0f5afaae9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a814850ce00fd2c855e86f2cd33855388f87be7968f70a152e177f0f5afaae9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a814850ce00fd2c855e86f2cd33855388f87be7968f70a152e177f0f5afaae9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:09.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:09 compute-0 podman[267896]: 2026-01-27 09:05:09.234682181 +0000 UTC m=+0.247223527 container init e47c1197a73c39ade4335995468c1cf23130d96c89073a9a051db8c3c1a702bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shaw, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:05:09 compute-0 podman[267896]: 2026-01-27 09:05:09.248962009 +0000 UTC m=+0.261503305 container start e47c1197a73c39ade4335995468c1cf23130d96c89073a9a051db8c3c1a702bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shaw, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 27 09:05:09 compute-0 podman[267896]: 2026-01-27 09:05:09.298561677 +0000 UTC m=+0.311103003 container attach e47c1197a73c39ade4335995468c1cf23130d96c89073a9a051db8c3c1a702bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:05:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:09.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:10 compute-0 lucid_shaw[267913]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:05:10 compute-0 lucid_shaw[267913]: --> relative data size: 1.0
Jan 27 09:05:10 compute-0 lucid_shaw[267913]: --> All data devices are unavailable
Jan 27 09:05:10 compute-0 systemd[1]: libpod-e47c1197a73c39ade4335995468c1cf23130d96c89073a9a051db8c3c1a702bb.scope: Deactivated successfully.
Jan 27 09:05:10 compute-0 podman[267896]: 2026-01-27 09:05:10.085554677 +0000 UTC m=+1.098095993 container died e47c1197a73c39ade4335995468c1cf23130d96c89073a9a051db8c3c1a702bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shaw, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:05:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a814850ce00fd2c855e86f2cd33855388f87be7968f70a152e177f0f5afaae9-merged.mount: Deactivated successfully.
Jan 27 09:05:10 compute-0 ceph-mon[74357]: pgmap v1253: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:10 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/965814207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:05:10 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1491680667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:05:10 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1491680667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:05:10 compute-0 podman[267896]: 2026-01-27 09:05:10.139233096 +0000 UTC m=+1.151774412 container remove e47c1197a73c39ade4335995468c1cf23130d96c89073a9a051db8c3c1a702bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:05:10 compute-0 systemd[1]: libpod-conmon-e47c1197a73c39ade4335995468c1cf23130d96c89073a9a051db8c3c1a702bb.scope: Deactivated successfully.
Jan 27 09:05:10 compute-0 sudo[267738]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:10 compute-0 sudo[267940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:10 compute-0 sudo[267940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:10 compute-0 sudo[267940]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:10 compute-0 sudo[267965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:05:10 compute-0 sudo[267965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:10 compute-0 sudo[267965]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:10 compute-0 sudo[267990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:10 compute-0 sudo[267990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:10 compute-0 sudo[267990]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:10 compute-0 sudo[268015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:05:10 compute-0 sudo[268015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 27 09:05:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:10 compute-0 podman[268082]: 2026-01-27 09:05:10.751227512 +0000 UTC m=+0.040232414 container create 0e840e988e5be362a73bf865364ec34119ba1e1405f1f2d68b5e5ca0612b00c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swartz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 09:05:10 compute-0 systemd[1]: Started libpod-conmon-0e840e988e5be362a73bf865364ec34119ba1e1405f1f2d68b5e5ca0612b00c1.scope.
Jan 27 09:05:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:05:10 compute-0 podman[268082]: 2026-01-27 09:05:10.73381909 +0000 UTC m=+0.022824022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:05:10 compute-0 podman[268082]: 2026-01-27 09:05:10.843463048 +0000 UTC m=+0.132467970 container init 0e840e988e5be362a73bf865364ec34119ba1e1405f1f2d68b5e5ca0612b00c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swartz, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:05:10 compute-0 podman[268082]: 2026-01-27 09:05:10.851662931 +0000 UTC m=+0.140667823 container start 0e840e988e5be362a73bf865364ec34119ba1e1405f1f2d68b5e5ca0612b00c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 09:05:10 compute-0 podman[268082]: 2026-01-27 09:05:10.854642232 +0000 UTC m=+0.143647154 container attach 0e840e988e5be362a73bf865364ec34119ba1e1405f1f2d68b5e5ca0612b00c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:05:10 compute-0 epic_swartz[268098]: 167 167
Jan 27 09:05:10 compute-0 systemd[1]: libpod-0e840e988e5be362a73bf865364ec34119ba1e1405f1f2d68b5e5ca0612b00c1.scope: Deactivated successfully.
Jan 27 09:05:10 compute-0 podman[268082]: 2026-01-27 09:05:10.858163667 +0000 UTC m=+0.147168589 container died 0e840e988e5be362a73bf865364ec34119ba1e1405f1f2d68b5e5ca0612b00c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swartz, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:05:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1426cd30ddabd56e2e0d5235c2d541e0c4e78698c2f7ced8185cc1682767c76d-merged.mount: Deactivated successfully.
Jan 27 09:05:10 compute-0 podman[268082]: 2026-01-27 09:05:10.898510413 +0000 UTC m=+0.187515315 container remove 0e840e988e5be362a73bf865364ec34119ba1e1405f1f2d68b5e5ca0612b00c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swartz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 27 09:05:10 compute-0 systemd[1]: libpod-conmon-0e840e988e5be362a73bf865364ec34119ba1e1405f1f2d68b5e5ca0612b00c1.scope: Deactivated successfully.
Jan 27 09:05:11 compute-0 podman[268122]: 2026-01-27 09:05:11.060693219 +0000 UTC m=+0.041962071 container create 330f145a214022a72906b1ea391fbedf767be2f40c7dae04ae3f7e1b8b3017d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_morse, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:05:11 compute-0 systemd[1]: Started libpod-conmon-330f145a214022a72906b1ea391fbedf767be2f40c7dae04ae3f7e1b8b3017d7.scope.
Jan 27 09:05:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61a81b867060112cbd35b738b3fbae3ce7a9e946f30f68be9cac1ea2b5fa804/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61a81b867060112cbd35b738b3fbae3ce7a9e946f30f68be9cac1ea2b5fa804/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61a81b867060112cbd35b738b3fbae3ce7a9e946f30f68be9cac1ea2b5fa804/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61a81b867060112cbd35b738b3fbae3ce7a9e946f30f68be9cac1ea2b5fa804/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:11 compute-0 podman[268122]: 2026-01-27 09:05:11.042199128 +0000 UTC m=+0.023468010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:05:11 compute-0 podman[268122]: 2026-01-27 09:05:11.146778338 +0000 UTC m=+0.128047210 container init 330f145a214022a72906b1ea391fbedf767be2f40c7dae04ae3f7e1b8b3017d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_morse, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:05:11 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1865757354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:05:11 compute-0 podman[268122]: 2026-01-27 09:05:11.155739332 +0000 UTC m=+0.137008184 container start 330f145a214022a72906b1ea391fbedf767be2f40c7dae04ae3f7e1b8b3017d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 09:05:11 compute-0 podman[268122]: 2026-01-27 09:05:11.159185955 +0000 UTC m=+0.140454797 container attach 330f145a214022a72906b1ea391fbedf767be2f40c7dae04ae3f7e1b8b3017d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_morse, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:05:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:11.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:11.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:12 compute-0 naughty_morse[268138]: {
Jan 27 09:05:12 compute-0 naughty_morse[268138]:     "0": [
Jan 27 09:05:12 compute-0 naughty_morse[268138]:         {
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             "devices": [
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "/dev/loop3"
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             ],
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             "lv_name": "ceph_lv0",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             "lv_size": "7511998464",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             "name": "ceph_lv0",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             "tags": {
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "ceph.cluster_name": "ceph",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "ceph.crush_device_class": "",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "ceph.encrypted": "0",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "ceph.osd_id": "0",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "ceph.type": "block",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:                 "ceph.vdo": "0"
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             },
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             "type": "block",
Jan 27 09:05:12 compute-0 naughty_morse[268138]:             "vg_name": "ceph_vg0"
Jan 27 09:05:12 compute-0 naughty_morse[268138]:         }
Jan 27 09:05:12 compute-0 naughty_morse[268138]:     ]
Jan 27 09:05:12 compute-0 naughty_morse[268138]: }
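[editor's note] The JSON that naughty_morse printed above has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it, with the ceph.* metadata duplicated between lv_tags and tags. A small sketch, assuming exactly this shape, that flattens it into an osd_id -> block-device map:

    import json

    def osd_block_devices(lvm_list_json):
        """Map OSD id -> LV path for every LV of type 'block'."""
        devices = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    devices[int(osd_id)] = lv["lv_path"]
        return devices

    sample = '''{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                        "type": "block",
                        "tags": {"ceph.osd_id": "0"}}]}'''
    print(osd_block_devices(sample))   # {0: '/dev/ceph_vg0/ceph_lv0'}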
Jan 27 09:05:12 compute-0 systemd[1]: libpod-330f145a214022a72906b1ea391fbedf767be2f40c7dae04ae3f7e1b8b3017d7.scope: Deactivated successfully.
Jan 27 09:05:12 compute-0 podman[268122]: 2026-01-27 09:05:12.044092326 +0000 UTC m=+1.025361178 container died 330f145a214022a72906b1ea391fbedf767be2f40c7dae04ae3f7e1b8b3017d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d61a81b867060112cbd35b738b3fbae3ce7a9e946f30f68be9cac1ea2b5fa804-merged.mount: Deactivated successfully.
Jan 27 09:05:12 compute-0 podman[268122]: 2026-01-27 09:05:12.098358091 +0000 UTC m=+1.079626943 container remove 330f145a214022a72906b1ea391fbedf767be2f40c7dae04ae3f7e1b8b3017d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_morse, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:05:12 compute-0 systemd[1]: libpod-conmon-330f145a214022a72906b1ea391fbedf767be2f40c7dae04ae3f7e1b8b3017d7.scope: Deactivated successfully.
Jan 27 09:05:12 compute-0 sudo[268015]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:12 compute-0 ceph-mon[74357]: pgmap v1254: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 27 09:05:12 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1937372851' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:05:12 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1937372851' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:05:12 compute-0 sudo[268161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:12 compute-0 sudo[268161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:12 compute-0 sudo[268161]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:12 compute-0 sudo[268186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:05:12 compute-0 sudo[268186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:12 compute-0 sudo[268186]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:12 compute-0 sudo[268211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:12 compute-0 sudo[268211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:12 compute-0 sudo[268211]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:12 compute-0 sudo[268236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:05:12 compute-0 sudo[268236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:05:12 compute-0 podman[268302]: 2026-01-27 09:05:12.744027711 +0000 UTC m=+0.041614411 container create f0c5668ac18a04cf17d76e80aba1f93e9ff9a3da6078f906f819b27786a51b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:05:12 compute-0 systemd[1]: Started libpod-conmon-f0c5668ac18a04cf17d76e80aba1f93e9ff9a3da6078f906f819b27786a51b8e.scope.
Jan 27 09:05:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:05:12 compute-0 podman[268302]: 2026-01-27 09:05:12.816966593 +0000 UTC m=+0.114553313 container init f0c5668ac18a04cf17d76e80aba1f93e9ff9a3da6078f906f819b27786a51b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:05:12 compute-0 podman[268302]: 2026-01-27 09:05:12.723915475 +0000 UTC m=+0.021502195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:05:12 compute-0 podman[268302]: 2026-01-27 09:05:12.824082297 +0000 UTC m=+0.121669007 container start f0c5668ac18a04cf17d76e80aba1f93e9ff9a3da6078f906f819b27786a51b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:05:12 compute-0 podman[268302]: 2026-01-27 09:05:12.827960572 +0000 UTC m=+0.125547272 container attach f0c5668ac18a04cf17d76e80aba1f93e9ff9a3da6078f906f819b27786a51b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:05:12 compute-0 jovial_joliot[268318]: 167 167
Jan 27 09:05:12 compute-0 systemd[1]: libpod-f0c5668ac18a04cf17d76e80aba1f93e9ff9a3da6078f906f819b27786a51b8e.scope: Deactivated successfully.
Jan 27 09:05:12 compute-0 podman[268302]: 2026-01-27 09:05:12.829932616 +0000 UTC m=+0.127519316 container died f0c5668ac18a04cf17d76e80aba1f93e9ff9a3da6078f906f819b27786a51b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e571bfba25a82e846c5616075513a7ad4a207b58bedc2bae7c3143599a337b17-merged.mount: Deactivated successfully.
Jan 27 09:05:12 compute-0 podman[268302]: 2026-01-27 09:05:12.85955684 +0000 UTC m=+0.157143540 container remove f0c5668ac18a04cf17d76e80aba1f93e9ff9a3da6078f906f819b27786a51b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:05:12 compute-0 systemd[1]: libpod-conmon-f0c5668ac18a04cf17d76e80aba1f93e9ff9a3da6078f906f819b27786a51b8e.scope: Deactivated successfully.
Jan 27 09:05:13 compute-0 podman[268344]: 2026-01-27 09:05:13.027345278 +0000 UTC m=+0.042726881 container create 2670609ed443c93a3e05b9d3c92bd40f88d46f8b2ed8e521790438e0069989cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_borg, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:05:13 compute-0 systemd[1]: Started libpod-conmon-2670609ed443c93a3e05b9d3c92bd40f88d46f8b2ed8e521790438e0069989cd.scope.
Jan 27 09:05:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f0204b5762ac48b008657f5cbaa7ca020612ca0211771cececde0fd9701c30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f0204b5762ac48b008657f5cbaa7ca020612ca0211771cececde0fd9701c30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f0204b5762ac48b008657f5cbaa7ca020612ca0211771cececde0fd9701c30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f0204b5762ac48b008657f5cbaa7ca020612ca0211771cececde0fd9701c30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:13 compute-0 podman[268344]: 2026-01-27 09:05:13.00971006 +0000 UTC m=+0.025091663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:05:13 compute-0 podman[268344]: 2026-01-27 09:05:13.104179846 +0000 UTC m=+0.119561479 container init 2670609ed443c93a3e05b9d3c92bd40f88d46f8b2ed8e521790438e0069989cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 09:05:13 compute-0 podman[268344]: 2026-01-27 09:05:13.114723032 +0000 UTC m=+0.130104635 container start 2670609ed443c93a3e05b9d3c92bd40f88d46f8b2ed8e521790438e0069989cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_borg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 09:05:13 compute-0 podman[268344]: 2026-01-27 09:05:13.118696361 +0000 UTC m=+0.134077984 container attach 2670609ed443c93a3e05b9d3c92bd40f88d46f8b2ed8e521790438e0069989cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:05:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:13.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:13.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:13 compute-0 tender_borg[268361]: {
Jan 27 09:05:13 compute-0 tender_borg[268361]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:05:13 compute-0 tender_borg[268361]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:05:13 compute-0 tender_borg[268361]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:05:13 compute-0 tender_borg[268361]:         "osd_id": 0,
Jan 27 09:05:13 compute-0 tender_borg[268361]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:05:13 compute-0 tender_borg[268361]:         "type": "bluestore"
Jan 27 09:05:13 compute-0 tender_borg[268361]:     }
Jan 27 09:05:13 compute-0 tender_borg[268361]: }
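[editor's note] tender_borg's output is the `ceph-volume raw list` counterpart requested by the cephadm call logged at 09:05:12: the same OSD, but keyed by osd_uuid and resolved to the device-mapper path. A sketch, assuming this shape, that re-keys it by numeric osd_id so it can be joined against the lvm listing above:

    import json

    def osds_by_id(raw_list_json):
        """Re-key `ceph-volume raw list` output by numeric osd_id."""
        raw = json.loads(raw_list_json)
        return {entry["osd_id"]: entry for entry in raw.values()}

    sample = '''{"c06a7c81-ab3c-42b8-812f-79473670be30": {
                   "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
                   "device": "/dev/mapper/ceph_vg0-ceph_lv0",
                   "osd_id": 0,
                   "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
                   "type": "bluestore"}}'''
    print(osds_by_id(sample)[0]["device"])   # /dev/mapper/ceph_vg0-ceph_lv0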
Jan 27 09:05:13 compute-0 systemd[1]: libpod-2670609ed443c93a3e05b9d3c92bd40f88d46f8b2ed8e521790438e0069989cd.scope: Deactivated successfully.
Jan 27 09:05:13 compute-0 podman[268344]: 2026-01-27 09:05:13.976723041 +0000 UTC m=+0.992104644 container died 2670609ed443c93a3e05b9d3c92bd40f88d46f8b2ed8e521790438e0069989cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 09:05:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-79f0204b5762ac48b008657f5cbaa7ca020612ca0211771cececde0fd9701c30-merged.mount: Deactivated successfully.
Jan 27 09:05:14 compute-0 podman[268344]: 2026-01-27 09:05:14.029876195 +0000 UTC m=+1.045257798 container remove 2670609ed443c93a3e05b9d3c92bd40f88d46f8b2ed8e521790438e0069989cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_borg, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:05:14 compute-0 systemd[1]: libpod-conmon-2670609ed443c93a3e05b9d3c92bd40f88d46f8b2ed8e521790438e0069989cd.scope: Deactivated successfully.
Jan 27 09:05:14 compute-0 sudo[268236]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:05:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:05:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:05:14 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:05:14 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 44b4196b-3f6d-453d-b2eb-7990d3c6ba9c does not exist
Jan 27 09:05:14 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev a330d68e-c350-47a3-bcb7-5aa8808c7c14 does not exist
Jan 27 09:05:14 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f3e41d07-2783-43a3-b8e1-6d98f5e109e8 does not exist
Jan 27 09:05:14 compute-0 ceph-mon[74357]: pgmap v1255: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:05:14 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/4175760959' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:05:14 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/4175760959' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:05:14 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/543559399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:05:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:05:14 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:05:14 compute-0 sudo[268392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:14 compute-0 sudo[268392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:14 compute-0 sudo[268392]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:14 compute-0 sudo[268417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:05:14 compute-0 sudo[268417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:14 compute-0 sudo[268417]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:14 compute-0 nova_compute[247671]: 2026-01-27 09:05:14.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:05:14 compute-0 nova_compute[247671]: 2026-01-27 09:05:14.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:05:14 compute-0 nova_compute[247671]: 2026-01-27 09:05:14.424 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:05:14 compute-0 nova_compute[247671]: 2026-01-27 09:05:14.445 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:14 compute-0 nova_compute[247671]: 2026-01-27 09:05:14.446 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:14 compute-0 nova_compute[247671]: 2026-01-27 09:05:14.446 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:05:14 compute-0 nova_compute[247671]: 2026-01-27 09:05:14.446 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:05:14 compute-0 nova_compute[247671]: 2026-01-27 09:05:14.446 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:05:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:05:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:05:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/649435819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:05:14 compute-0 nova_compute[247671]: 2026-01-27 09:05:14.860 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
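[editor's note] The two processutils lines show nova's resource audit shelling out to `ceph df` rather than using librados (0.414s here). A stdlib sketch of the same probe; the "stats" / "total_avail_bytes" keys match what recent `ceph df --format=json` emits, but treat the exact key names as an assumption and verify them against your release:

    import json
    import subprocess

    def ceph_avail_gib(conf="/etc/ceph/ceph.conf", client="openstack"):
        """Cluster free capacity in GiB via the same CLI call nova runs."""
        cmd = ["ceph", "df", "--format=json", "--id", client, "--conf", conf]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        stats = json.loads(out.stdout)["stats"]   # assumed key layout
        return stats["total_avail_bytes"] / 2**30

    # ceph_avail_gib() needs a reachable cluster and a client.openstack
    # keyring, so it is not executed here.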
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.009 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.011 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5111MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.011 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.011 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:05:15
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:05:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:15.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.184 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:05:15 compute-0 ceph-mon[74357]: pgmap v1256: 305 pgs: 305 active+clean; 88 MiB data, 246 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:05:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/649435819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:05:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3983468347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.203 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 896e93c2-cb3b-4849-b8c7-679ca6577232 has allocations against this compute host but is not found in the database.
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.203 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.203 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:05:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.374 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing inventories for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.433 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Updating ProviderTree inventory for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.434 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Updating inventory in ProviderTree for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.450 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing aggregate associations for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.473 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing trait associations for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.528 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:05:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:05:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:15.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:05:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:05:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/851249625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.972 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:05:15 compute-0 nova_compute[247671]: 2026-01-27 09:05:15.978 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:05:16 compute-0 nova_compute[247671]: 2026-01-27 09:05:16.005 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:05:16 compute-0 nova_compute[247671]: 2026-01-27 09:05:16.007 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:05:16 compute-0 nova_compute[247671]: 2026-01-27 09:05:16.007 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
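[editor's note] The inventory dict placement reports for this node encodes schedulable capacity as (total - reserved) * allocation_ratio per resource class. Working that arithmetic for the values logged above:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 18.0

So this 8-vCPU, 7679 MB host can be oversubscribed to 32 scheduled vCPUs, while disk is deliberately undersubscribed at a 0.9 ratio.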
Jan 27 09:05:16 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/851249625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:05:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 27 09:05:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:17.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:17 compute-0 ceph-mon[74357]: pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 27 09:05:17 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:05:17.355 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:05:17 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:05:17.356 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:05:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:17.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:18 compute-0 podman[268488]: 2026-01-27 09:05:18.276844464 +0000 UTC m=+0.090482179 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:05:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 27 09:05:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:19.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:19 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:05:19.358 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
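[editor's note] The DbSetCommand above completes a handshake: at 09:05:17 the agent saw SB_Global.nb_cfg move from 14 to 15, waited its advertised 2 seconds, and now writes 15 into its Chassis_Private row so ovn-northd can tell the metadata agent has caught up. A sketch of the equivalent one-off write via the ovn-sbctl CLI, using the chassis UUID from the log line; the argv is only printed here, since executing it needs a live southbound DB, and key quoting conventions may vary by release:

    import shlex

    record = "fd496359-7f94-4196-96c9-9e7fb7c843a0"   # from the log line
    # ovn-sbctl parses column:key=value, taking everything between the
    # first ':' and '=' as the key, so the colon inside the key is fine.
    cmd = [
        "ovn-sbctl", "set", "Chassis_Private", record,
        "external_ids:neutron:ovn-metadata-sb-cfg=15",
    ]
    print(shlex.join(cmd))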
Jan 27 09:05:19 compute-0 ceph-mon[74357]: pgmap v1258: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 27 09:05:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:19.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 27 09:05:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:21.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:21 compute-0 ceph-mon[74357]: pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 27 09:05:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:21.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 41 op/s
Jan 27 09:05:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:05:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:23.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:05:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:05:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:23.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:05:23 compute-0 ceph-mon[74357]: pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 41 op/s
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
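[editor's note] Every pg_autoscaler line above fits one formula: pg target = space_ratio * bias * (mon_target_pg_per_osd * OSD count), then quantized to a power of two. The multiplier that reproduces every line is 300, consistent with the default mon_target_pg_per_osd of 100 and 3 OSDs (21 GiB of capacity over ~7 GiB LVs); treat the 3-OSD count as an inference from this log, not a stated fact. A sketch reproducing the '.mgr' line:

    def pg_target(space_ratio, bias, osds=3, pg_per_osd=100):
        """Raw autoscaler target and its power-of-two quantization."""
        raw = space_ratio * bias * pg_per_osd * osds
        quantized = 1
        while quantized < raw:
            quantized *= 2
        return raw, quantized

    raw, q = pg_target(2.0538165363856318e-05, 1.0)
    print(raw, q)   # 0.006161449609156895 1 -- matches the '.mgr' line
    # The data pools stay at 32 despite tiny targets because the
    # autoscaler only shrinks pg_num when target and current differ by
    # more than a threshold (and respects pg_num_min); omitted here.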
Jan 27 09:05:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 852 B/s wr, 29 op/s
Jan 27 09:05:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:25.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:25.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:25 compute-0 ceph-mon[74357]: pgmap v1261: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 852 B/s wr, 29 op/s
Jan 27 09:05:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 852 B/s wr, 29 op/s
Jan 27 09:05:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:27.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:27.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:27 compute-0 ceph-mon[74357]: pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 852 B/s wr, 29 op/s
Jan 27 09:05:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:28 compute-0 sudo[268520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:28 compute-0 sudo[268520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:28 compute-0 sudo[268520]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:28 compute-0 sudo[268545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:28 compute-0 sudo[268545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:28 compute-0 sudo[268545]: pam_unix(sudo:session): session closed for user root
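[Annotation] The recurring "ceph-admin ... COMMAND=/bin/true" sudo triplets are consistent with cephadm periodically verifying passwordless sudo over SSH (an assumption; the log only records the sessions). Each check leaves an open/close pair keyed by the sudo PID, which is easy to pair up when auditing:

```python
import re

# Hedged sketch: pair pam_unix session open/close lines by sudo PID, as seen
# in the recurring ceph-admin /bin/true checks above.
OPEN_RE = re.compile(r"sudo\[(?P<pid>\d+)\]: pam_unix\(sudo:session\): session opened")
CLOSE_RE = re.compile(r"sudo\[(?P<pid>\d+)\]: pam_unix\(sudo:session\): session closed")

def clean_sessions(lines):
    open_pids, done = set(), []
    for line in lines:
        if (m := OPEN_RE.search(line)):
            open_pids.add(m["pid"])
        elif (m := CLOSE_RE.search(line)) and m["pid"] in open_pids:
            open_pids.remove(m["pid"])
            done.append(m["pid"])
    return done  # PIDs whose sessions opened and closed cleanly
```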
Jan 27 09:05:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:29.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:29.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:29 compute-0 ceph-mon[74357]: pgmap v1263: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 27 09:05:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:31.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 27 09:05:31 compute-0 podman[268571]: 2026-01-27 09:05:31.238775708 +0000 UTC m=+0.048501879 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
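[Annotation] The podman line above is a container health-check event: one enormous "key=value, ..." blob that embeds the whole container config. Parsing the blob in full is awkward (config_data nests JSON-like structures with commas), so a targeted extraction of just the monitoring-relevant keys is a pragmatic sketch; the field names below are taken from the event itself, and the regex's robustness is only as good as these samples:

```python
import re

# Hedged sketch: pull a few fields out of podman health_status event lines
# like the one above, without attempting to parse the whole config blob.
FIELDS = ("name", "health_status", "health_failing_streak")

def health_fields(line: str) -> dict:
    out = {}
    for key in FIELDS:
        if (m := re.search(rf"\b{key}=([^,)]+)", line)):
            out[key] = m.group(1)
    return out

# For the event above this would yield:
# {'name': 'ovn_metadata_agent', 'health_status': 'healthy',
#  'health_failing_streak': '0'}
```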
Jan 27 09:05:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:31.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:32 compute-0 ceph-mon[74357]: pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:33.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:33 compute-0 ceph-mon[74357]: pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:33.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:35.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:35.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:35 compute-0 ceph-mon[74357]: pgmap v1266: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:37.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:37.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 27 09:05:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 27 09:05:37 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 27 09:05:37 compute-0 ceph-mon[74357]: pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:38 compute-0 ceph-mon[74357]: osdmap e156: 3 total, 3 up, 3 in
Jan 27 09:05:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:39.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:39.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 27 09:05:39 compute-0 ceph-mon[74357]: pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:05:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 27 09:05:40 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 27 09:05:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 20 op/s
Jan 27 09:05:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:05:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:41.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:05:41 compute-0 ceph-mon[74357]: osdmap e157: 3 total, 3 up, 3 in
Jan 27 09:05:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:05:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:41.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:05:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 KiB/s wr, 23 op/s
Jan 27 09:05:42 compute-0 ceph-mon[74357]: pgmap v1271: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 20 op/s
Jan 27 09:05:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:05:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:43.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:05:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:43.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:44 compute-0 ceph-mon[74357]: pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 KiB/s wr, 23 op/s
Jan 27 09:05:44 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3377664095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:05:44 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3377664095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:05:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 KiB/s wr, 23 op/s
Jan 27 09:05:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:05:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:05:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:05:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:05:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:05:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:05:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:45.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:45 compute-0 ceph-mon[74357]: pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 KiB/s wr, 23 op/s
Jan 27 09:05:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:45.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.9 KiB/s wr, 41 op/s
Jan 27 09:05:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:47.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:47 compute-0 ceph-mon[74357]: pgmap v1274: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.9 KiB/s wr, 41 op/s
Jan 27 09:05:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:47.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Jan 27 09:05:49 compute-0 sudo[268601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:49 compute-0 sudo[268601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:49 compute-0 sudo[268601]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:49 compute-0 sudo[268632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:05:49 compute-0 sudo[268632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:05:49 compute-0 sudo[268632]: pam_unix(sudo:session): session closed for user root
Jan 27 09:05:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:05:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:49.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:05:49 compute-0 podman[268625]: 2026-01-27 09:05:49.222016738 +0000 UTC m=+0.110045151 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 09:05:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:49.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:49 compute-0 ceph-mon[74357]: pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Jan 27 09:05:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.0 KiB/s wr, 28 op/s
Jan 27 09:05:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 27 09:05:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 27 09:05:50 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 27 09:05:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:51.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:51.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:51 compute-0 ceph-mon[74357]: pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.0 KiB/s wr, 28 op/s
Jan 27 09:05:51 compute-0 ceph-mon[74357]: osdmap e158: 3 total, 3 up, 3 in
Jan 27 09:05:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 921 B/s wr, 26 op/s
Jan 27 09:05:52 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1259994102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:05:52 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1259994102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:05:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:53.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:53.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:53 compute-0 ceph-mon[74357]: pgmap v1278: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 921 B/s wr, 26 op/s
Jan 27 09:05:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:05:54.246 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:05:54.247 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:05:54.247 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:05:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 921 B/s wr, 26 op/s
Jan 27 09:05:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:05:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:55.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:05:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:55.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:56 compute-0 ceph-mon[74357]: pgmap v1279: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 921 B/s wr, 26 op/s
Jan 27 09:05:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 27 09:05:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:57.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:57 compute-0 ceph-mon[74357]: pgmap v1280: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 27 09:05:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:57.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 27 09:05:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:05:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/573142370' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:05:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:05:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/573142370' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:05:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:05:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:05:59.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:05:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:05:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:05:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:05:59.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:05:59 compute-0 ceph-mon[74357]: pgmap v1281: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 27 09:05:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/573142370' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:05:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/573142370' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:06:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 79 MiB data, 275 MiB used, 21 GiB / 21 GiB avail; 493 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Jan 27 09:06:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:01.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:01.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:01 compute-0 ceph-mon[74357]: pgmap v1282: 305 pgs: 305 active+clean; 79 MiB data, 275 MiB used, 21 GiB / 21 GiB avail; 493 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Jan 27 09:06:02 compute-0 podman[268683]: 2026-01-27 09:06:02.242867924 +0000 UTC m=+0.055765826 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 09:06:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 417 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.111718) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504763111800, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1805, "num_deletes": 252, "total_data_size": 3258136, "memory_usage": 3313032, "flush_reason": "Manual Compaction"}
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 27 09:06:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:03.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504763397571, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3199084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27144, "largest_seqno": 28948, "table_properties": {"data_size": 3190774, "index_size": 5124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17101, "raw_average_key_size": 20, "raw_value_size": 3174102, "raw_average_value_size": 3774, "num_data_blocks": 225, "num_entries": 841, "num_filter_entries": 841, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769504580, "oldest_key_time": 1769504580, "file_creation_time": 1769504763, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 285887 microseconds, and 6918 cpu microseconds.
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.397610) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3199084 bytes OK
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.397627) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.401562) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.401578) EVENT_LOG_v1 {"time_micros": 1769504763401573, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.401596) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3250721, prev total WAL file size 3250721, number of live WAL files 2.
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.402474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3124KB)], [62(7779KB)]
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504763402536, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11164813, "oldest_snapshot_seqno": -1}
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5227 keys, 9154678 bytes, temperature: kUnknown
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504763490346, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 9154678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9119234, "index_size": 21254, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 133260, "raw_average_key_size": 25, "raw_value_size": 9024204, "raw_average_value_size": 1726, "num_data_blocks": 863, "num_entries": 5227, "num_filter_entries": 5227, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769504763, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.490621) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9154678 bytes
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.492308) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.0 rd, 104.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.6 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 5750, records dropped: 523 output_compression: NoCompression
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.492332) EVENT_LOG_v1 {"time_micros": 1769504763492321, "job": 34, "event": "compaction_finished", "compaction_time_micros": 87892, "compaction_time_cpu_micros": 20023, "output_level": 6, "num_output_files": 1, "total_output_size": 9154678, "num_input_records": 5750, "num_output_records": 5227, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504763493320, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504763495661, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.402362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.495707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.495713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.495715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.495717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:06:03 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:06:03.495719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
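[Annotation] The RocksDB block above is a routine memtable flush (job 33) followed by a manual L0-to-L6 compaction (job 34) of the monitor's store.db. The amplification figures it prints can be re-derived from the EVENT_LOG_v1 numbers: the formulas below are inferred from the fact that they reproduce the logged values (2.9 and 6.4), not quoted from RocksDB source:

```python
# Hedged sketch: re-derive the amplification figures logged for job 34.
l0_input = 3199084       # bytes, file #64 (the freshly flushed L0 table)
total_input = 11164813   # bytes, "input_data_size" (L0 + L6 inputs)
output = 9154678         # bytes, "total_output_size" (new L6 file #65)

write_amplify = output / l0_input                       # ~2.86 -> logged 2.9
read_write_amplify = (total_input + output) / l0_input  # ~6.35 -> logged 6.4
print(round(write_amplify, 1), round(read_write_amplify, 1))  # 2.9 6.4
```

The "records in: 5750, records dropped: 523" figure likewise matches the table properties: 841 L0 entries (252 of them deletions) merged with 5227 existing L6 keys, compacting back down to 5227.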
Jan 27 09:06:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:03.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:04 compute-0 ceph-mon[74357]: pgmap v1283: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 417 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 27 09:06:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 27 09:06:05 compute-0 nova_compute[247671]: 2026-01-27 09:06:05.006 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:06:05 compute-0 nova_compute[247671]: 2026-01-27 09:06:05.006 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:06:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:06:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:05.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:06:05 compute-0 ceph-mon[74357]: pgmap v1284: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 27 09:06:05 compute-0 nova_compute[247671]: 2026-01-27 09:06:05.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:06:05 compute-0 nova_compute[247671]: 2026-01-27 09:06:05.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:06:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:05.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 27 09:06:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:07.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:07 compute-0 ceph-mon[74357]: pgmap v1285: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 27 09:06:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:07.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:08 compute-0 nova_compute[247671]: 2026-01-27 09:06:08.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:06:08 compute-0 nova_compute[247671]: 2026-01-27 09:06:08.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:06:08 compute-0 nova_compute[247671]: 2026-01-27 09:06:08.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:06:08 compute-0 nova_compute[247671]: 2026-01-27 09:06:08.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:06:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 27 09:06:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:09.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:09 compute-0 sudo[268706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:09 compute-0 sudo[268706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:09 compute-0 sudo[268706]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:09 compute-0 sudo[268731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:09 compute-0 sudo[268731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:09 compute-0 sudo[268731]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:09.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:09 compute-0 ceph-mon[74357]: pgmap v1286: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 27 09:06:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 27 09:06:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:11.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:11 compute-0 nova_compute[247671]: 2026-01-27 09:06:11.742 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:06:11 compute-0 nova_compute[247671]: 2026-01-27 09:06:11.743 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:06:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:11.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:11 compute-0 ceph-mon[74357]: pgmap v1287: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 27 09:06:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 502 KiB/s wr, 2 op/s
Jan 27 09:06:12 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1647828876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:06:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:13.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:06:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:13.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:06:13 compute-0 ceph-mon[74357]: pgmap v1288: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 502 KiB/s wr, 2 op/s
Jan 27 09:06:13 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3716012850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:06:14 compute-0 nova_compute[247671]: 2026-01-27 09:06:14.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:06:14 compute-0 nova_compute[247671]: 2026-01-27 09:06:14.462 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:06:14 compute-0 nova_compute[247671]: 2026-01-27 09:06:14.463 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:06:14 compute-0 nova_compute[247671]: 2026-01-27 09:06:14.463 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:06:14 compute-0 nova_compute[247671]: 2026-01-27 09:06:14.463 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:06:14 compute-0 nova_compute[247671]: 2026-01-27 09:06:14.464 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:06:14 compute-0 sudo[268760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:14 compute-0 sudo[268760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:14 compute-0 sudo[268760]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:14 compute-0 sudo[268803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:06:14 compute-0 sudo[268803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:14 compute-0 sudo[268803]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 27 09:06:14 compute-0 sudo[268829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:14 compute-0 sudo[268829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:14 compute-0 sudo[268829]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:14 compute-0 sudo[268854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:06:14 compute-0 sudo[268854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:06:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1018402576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:06:14 compute-0 nova_compute[247671]: 2026-01-27 09:06:14.893 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:06:14 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3181474586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:06:14 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2826646151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:06:14 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1018402576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
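[The `update_available_resource` periodic task above shells out to `ceph df --format=json --id openstack` (the subprocess lines at 09:06:14.464 and the 0.429s return), and the mon audits each dispatch as entity='client.openstack'. A hedged sketch of that round trip; the 'stats'/'total_avail_bytes' keys are an assumption about this cluster's `ceph df` JSON, checked only against recent Ceph releases:

    import json
    import subprocess

    # Rough reproduction of the call logged by oslo_concurrency.processutils.
    def ceph_free_gib(conf='/etc/ceph/ceph.conf', user='openstack'):
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', user, '--conf', conf])
        # 'stats'/'total_avail_bytes' assumed present in this release's output.
        return json.loads(out)['stats']['total_avail_bytes'] / 1024 ** 3
]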
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.052 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.054 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5167MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.055 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.055 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:06:15
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.log', 'images', 'default.rgw.meta', 'default.rgw.control', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:06:15 compute-0 sudo[268854]: pam_unix(sudo:session): session closed for user root
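[The sudo lines above repeat a fixed triplet before every cephadm action: /bin/true (can we sudo at all?), /bin/which python3 (is an interpreter present?), then the long-running /bin/python3 .../cephadm.<digest> call such as the gather-facts run that just closed. An assumed reconstruction of that probe-then-run sequence; the real orchestrator drives it over SSH from the active mgr, not via a local shell:

    import subprocess

    # Assumed reconstruction of the probe-then-run pattern in the sudo lines
    # above; the function and host names here are illustrative only.
    def remote_sudo(host, argv):
        subprocess.run(['ssh', host, 'sudo', '--'] + argv, check=True)

    def run_cephadm(host, binary, *args):
        remote_sudo(host, ['/bin/true'])              # sudo permitted?
        remote_sudo(host, ['/bin/which', 'python3'])  # interpreter present?
        remote_sudo(host, ['/bin/python3', binary, *args])
]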
Jan 27 09:06:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:15.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.246 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.247 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.247 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.289 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:06:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:06:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:06:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:06:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:06:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:06:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 9e7c251e-2df0-4b30-b8de-7317be27e281 does not exist
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 824620b2-bd10-4e80-af8e-5d14fe7ec814 does not exist
Jan 27 09:06:15 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev fd58282e-baf3-416c-990b-b03bf6faf37b does not exist
Jan 27 09:06:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:06:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:06:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:06:15 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:06:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:06:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:06:15 compute-0 sudo[268911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:15 compute-0 sudo[268911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:15 compute-0 sudo[268911]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:15 compute-0 sudo[268955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:06:15 compute-0 sudo[268955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:15 compute-0 sudo[268955]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:15 compute-0 sudo[268980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:15 compute-0 sudo[268980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:15 compute-0 sudo[268980]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:15 compute-0 sudo[269005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:06:15 compute-0 sudo[269005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:06:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905000270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.717 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.722 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:06:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:15.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.864 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
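[The inventory dict above is what the resource tracker reports to Placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio. A worked example using the exact numbers from the log line:

    # Placement capacity formula: (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 18.0
]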
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.865 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:06:15 compute-0 nova_compute[247671]: 2026-01-27 09:06:15.865 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:06:15 compute-0 podman[269073]: 2026-01-27 09:06:15.842176361 +0000 UTC m=+0.019638365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:06:15 compute-0 podman[269073]: 2026-01-27 09:06:15.956082935 +0000 UTC m=+0.133544949 container create 36d4c3a44a1cbb8d8158380a70e09c473ef76c719f7de0bc87283473ee0ee62e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_newton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:06:16 compute-0 systemd[1]: Started libpod-conmon-36d4c3a44a1cbb8d8158380a70e09c473ef76c719f7de0bc87283473ee0ee62e.scope.
Jan 27 09:06:16 compute-0 ceph-mon[74357]: pgmap v1289: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 27 09:06:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:06:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:06:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:06:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:06:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:06:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:06:16 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/905000270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:06:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:06:16 compute-0 podman[269073]: 2026-01-27 09:06:16.042862883 +0000 UTC m=+0.220324887 container init 36d4c3a44a1cbb8d8158380a70e09c473ef76c719f7de0bc87283473ee0ee62e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_newton, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:06:16 compute-0 podman[269073]: 2026-01-27 09:06:16.05013579 +0000 UTC m=+0.227597764 container start 36d4c3a44a1cbb8d8158380a70e09c473ef76c719f7de0bc87283473ee0ee62e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_newton, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:06:16 compute-0 podman[269073]: 2026-01-27 09:06:16.054318364 +0000 UTC m=+0.231780358 container attach 36d4c3a44a1cbb8d8158380a70e09c473ef76c719f7de0bc87283473ee0ee62e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 09:06:16 compute-0 vigorous_newton[269088]: 167 167
Jan 27 09:06:16 compute-0 systemd[1]: libpod-36d4c3a44a1cbb8d8158380a70e09c473ef76c719f7de0bc87283473ee0ee62e.scope: Deactivated successfully.
Jan 27 09:06:16 compute-0 podman[269073]: 2026-01-27 09:06:16.058939669 +0000 UTC m=+0.236401643 container died 36d4c3a44a1cbb8d8158380a70e09c473ef76c719f7de0bc87283473ee0ee62e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 09:06:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1810da5a3111b78ab929edcd5fa8db3e406d114b0b8ff05fcca18eb503d8d9a7-merged.mount: Deactivated successfully.
Jan 27 09:06:16 compute-0 podman[269073]: 2026-01-27 09:06:16.095278306 +0000 UTC m=+0.272740280 container remove 36d4c3a44a1cbb8d8158380a70e09c473ef76c719f7de0bc87283473ee0ee62e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_newton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 09:06:16 compute-0 systemd[1]: libpod-conmon-36d4c3a44a1cbb8d8158380a70e09c473ef76c719f7de0bc87283473ee0ee62e.scope: Deactivated successfully.
Jan 27 09:06:16 compute-0 podman[269110]: 2026-01-27 09:06:16.251030168 +0000 UTC m=+0.040927643 container create 0b3a66194b2a1b1b666ec679c632f4b36b7d611f55b4d2b3f53d515676d997e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 09:06:16 compute-0 systemd[1]: Started libpod-conmon-0b3a66194b2a1b1b666ec679c632f4b36b7d611f55b4d2b3f53d515676d997e3.scope.
Jan 27 09:06:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:06:16 compute-0 podman[269110]: 2026-01-27 09:06:16.234216981 +0000 UTC m=+0.024114476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430c9f140f3e960f294356df2cc6908e0f46cba758ab69a430d6c2e1e0625a72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430c9f140f3e960f294356df2cc6908e0f46cba758ab69a430d6c2e1e0625a72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430c9f140f3e960f294356df2cc6908e0f46cba758ab69a430d6c2e1e0625a72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430c9f140f3e960f294356df2cc6908e0f46cba758ab69a430d6c2e1e0625a72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430c9f140f3e960f294356df2cc6908e0f46cba758ab69a430d6c2e1e0625a72/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:16 compute-0 podman[269110]: 2026-01-27 09:06:16.343585723 +0000 UTC m=+0.133483218 container init 0b3a66194b2a1b1b666ec679c632f4b36b7d611f55b4d2b3f53d515676d997e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:06:16 compute-0 podman[269110]: 2026-01-27 09:06:16.352678209 +0000 UTC m=+0.142575684 container start 0b3a66194b2a1b1b666ec679c632f4b36b7d611f55b4d2b3f53d515676d997e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:06:16 compute-0 podman[269110]: 2026-01-27 09:06:16.355616429 +0000 UTC m=+0.145513944 container attach 0b3a66194b2a1b1b666ec679c632f4b36b7d611f55b4d2b3f53d515676d997e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:06:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 27 09:06:16 compute-0 nova_compute[247671]: 2026-01-27 09:06:16.866 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:06:16 compute-0 nova_compute[247671]: 2026-01-27 09:06:16.867 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:06:17 compute-0 awesome_curran[269126]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:06:17 compute-0 awesome_curran[269126]: --> relative data size: 1.0
Jan 27 09:06:17 compute-0 awesome_curran[269126]: --> All data devices are unavailable
Jan 27 09:06:17 compute-0 systemd[1]: libpod-0b3a66194b2a1b1b666ec679c632f4b36b7d611f55b4d2b3f53d515676d997e3.scope: Deactivated successfully.
Jan 27 09:06:17 compute-0 conmon[269126]: conmon 0b3a66194b2a1b1b666e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b3a66194b2a1b1b666ec679c632f4b36b7d611f55b4d2b3f53d515676d997e3.scope/container/memory.events
Jan 27 09:06:17 compute-0 podman[269110]: 2026-01-27 09:06:17.15842308 +0000 UTC m=+0.948320555 container died 0b3a66194b2a1b1b666ec679c632f4b36b7d611f55b4d2b3f53d515676d997e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:06:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-430c9f140f3e960f294356df2cc6908e0f46cba758ab69a430d6c2e1e0625a72-merged.mount: Deactivated successfully.
Jan 27 09:06:17 compute-0 podman[269110]: 2026-01-27 09:06:17.212252572 +0000 UTC m=+1.002150037 container remove 0b3a66194b2a1b1b666ec679c632f4b36b7d611f55b4d2b3f53d515676d997e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 27 09:06:17 compute-0 systemd[1]: libpod-conmon-0b3a66194b2a1b1b666ec679c632f4b36b7d611f55b4d2b3f53d515676d997e3.scope: Deactivated successfully.
Jan 27 09:06:17 compute-0 sudo[269005]: pam_unix(sudo:session): session closed for user root
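[The `lvm batch` run in container awesome_curran reported "All data devices are unavailable" and exited: its one candidate device, /dev/ceph_vg0/ceph_lv0, already carries OSD 0 (see the `lvm list` payload further down), so re-applying the default_drive_group spec is a no-op. A hypothetical availability check in that spirit; `lv_available` is illustrative, not ceph-volume's actual code:

    # Hypothetical check mirroring why the LV was skipped: an LV whose tags
    # already include ceph.osd_id is in use by an existing OSD.
    def lv_available(lv_tags: str) -> bool:
        tags = dict(t.split('=', 1) for t in lv_tags.split(',') if '=' in t)
        return 'ceph.osd_id' not in tags

    print(lv_available('ceph.osd_id=0,ceph.type=block'))  # False: in use
]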
Jan 27 09:06:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:17.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:17 compute-0 sudo[269156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:17 compute-0 sudo[269156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:17 compute-0 sudo[269156]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:17 compute-0 sudo[269181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:06:17 compute-0 sudo[269181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:17 compute-0 sudo[269181]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:17 compute-0 sudo[269206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:17 compute-0 sudo[269206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:17 compute-0 sudo[269206]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:17 compute-0 sudo[269231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:06:17 compute-0 sudo[269231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:17 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:06:17.672 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:06:17 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:06:17.674 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:06:17 compute-0 podman[269296]: 2026-01-27 09:06:17.766293824 +0000 UTC m=+0.033701426 container create 38bfbfce391998fd45a5c8a15be1a88205c2ef2656691c99bf1a2ff06d65ed5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:06:17 compute-0 systemd[1]: Started libpod-conmon-38bfbfce391998fd45a5c8a15be1a88205c2ef2656691c99bf1a2ff06d65ed5a.scope.
Jan 27 09:06:17 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:06:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:17.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:17 compute-0 podman[269296]: 2026-01-27 09:06:17.838060413 +0000 UTC m=+0.105468035 container init 38bfbfce391998fd45a5c8a15be1a88205c2ef2656691c99bf1a2ff06d65ed5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:06:17 compute-0 podman[269296]: 2026-01-27 09:06:17.844414947 +0000 UTC m=+0.111822549 container start 38bfbfce391998fd45a5c8a15be1a88205c2ef2656691c99bf1a2ff06d65ed5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:06:17 compute-0 elated_poincare[269312]: 167 167
Jan 27 09:06:17 compute-0 podman[269296]: 2026-01-27 09:06:17.75251044 +0000 UTC m=+0.019918062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:06:17 compute-0 systemd[1]: libpod-38bfbfce391998fd45a5c8a15be1a88205c2ef2656691c99bf1a2ff06d65ed5a.scope: Deactivated successfully.
Jan 27 09:06:17 compute-0 podman[269296]: 2026-01-27 09:06:17.84899361 +0000 UTC m=+0.116401212 container attach 38bfbfce391998fd45a5c8a15be1a88205c2ef2656691c99bf1a2ff06d65ed5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:06:17 compute-0 podman[269296]: 2026-01-27 09:06:17.849652249 +0000 UTC m=+0.117059851 container died 38bfbfce391998fd45a5c8a15be1a88205c2ef2656691c99bf1a2ff06d65ed5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:06:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-74d72315c94d8376e9d82b611d5e5f6358d8cddd7b3161ac498f5e9fff31af84-merged.mount: Deactivated successfully.
Jan 27 09:06:17 compute-0 podman[269296]: 2026-01-27 09:06:17.882174512 +0000 UTC m=+0.149582114 container remove 38bfbfce391998fd45a5c8a15be1a88205c2ef2656691c99bf1a2ff06d65ed5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:06:17 compute-0 systemd[1]: libpod-conmon-38bfbfce391998fd45a5c8a15be1a88205c2ef2656691c99bf1a2ff06d65ed5a.scope: Deactivated successfully.
Jan 27 09:06:18 compute-0 podman[269336]: 2026-01-27 09:06:18.044385329 +0000 UTC m=+0.040895751 container create e83ba664aa6883a59dd0c50412178cab0339f4bec05d732662f4aa099f0e3755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:06:18 compute-0 systemd[1]: Started libpod-conmon-e83ba664aa6883a59dd0c50412178cab0339f4bec05d732662f4aa099f0e3755.scope.
Jan 27 09:06:18 compute-0 ceph-mon[74357]: pgmap v1290: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 27 09:06:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01f3757d4dcc0a479b0515dcc200ce627a6f23ccb9adcae9bbe77e491cb288f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01f3757d4dcc0a479b0515dcc200ce627a6f23ccb9adcae9bbe77e491cb288f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01f3757d4dcc0a479b0515dcc200ce627a6f23ccb9adcae9bbe77e491cb288f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01f3757d4dcc0a479b0515dcc200ce627a6f23ccb9adcae9bbe77e491cb288f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:18 compute-0 podman[269336]: 2026-01-27 09:06:18.025927318 +0000 UTC m=+0.022437750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:06:18 compute-0 podman[269336]: 2026-01-27 09:06:18.124584268 +0000 UTC m=+0.121094690 container init e83ba664aa6883a59dd0c50412178cab0339f4bec05d732662f4aa099f0e3755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 09:06:18 compute-0 podman[269336]: 2026-01-27 09:06:18.129963574 +0000 UTC m=+0.126473986 container start e83ba664aa6883a59dd0c50412178cab0339f4bec05d732662f4aa099f0e3755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:06:18 compute-0 podman[269336]: 2026-01-27 09:06:18.13237183 +0000 UTC m=+0.128882242 container attach e83ba664aa6883a59dd0c50412178cab0339f4bec05d732662f4aa099f0e3755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 09:06:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]: {
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:     "0": [
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:         {
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             "devices": [
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "/dev/loop3"
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             ],
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             "lv_name": "ceph_lv0",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             "lv_size": "7511998464",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             "name": "ceph_lv0",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             "tags": {
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "ceph.cluster_name": "ceph",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "ceph.crush_device_class": "",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "ceph.encrypted": "0",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "ceph.osd_id": "0",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "ceph.type": "block",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:                 "ceph.vdo": "0"
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             },
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             "type": "block",
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:             "vg_name": "ceph_vg0"
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:         }
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]:     ]
Jan 27 09:06:18 compute-0 elastic_matsumoto[269352]: }
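[The JSON emitted by elastic_matsumoto is the `ceph-volume ... lvm list --format json` payload: top-level keys are OSD ids, each mapping to a list of LV records. A small consumer sketch, with key names taken directly from the payload above:

    import json

    # Key names ('lv_path', 'tags', 'ceph.osd_fsid') taken from the payload
    # printed above by `ceph-volume lvm list --format json`.
    def osd_devices(payload: str):
        return {osd_id: [(lv['lv_path'], lv['tags'].get('ceph.osd_fsid'))
                         for lv in lvs]
                for osd_id, lvs in json.loads(payload).items()}

    # For this host it yields:
    # {'0': [('/dev/ceph_vg0/ceph_lv0',
    #         'c06a7c81-ab3c-42b8-812f-79473670be30')]}
]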
Jan 27 09:06:18 compute-0 systemd[1]: libpod-e83ba664aa6883a59dd0c50412178cab0339f4bec05d732662f4aa099f0e3755.scope: Deactivated successfully.
Jan 27 09:06:18 compute-0 podman[269336]: 2026-01-27 09:06:18.909983365 +0000 UTC m=+0.906493777 container died e83ba664aa6883a59dd0c50412178cab0339f4bec05d732662f4aa099f0e3755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:06:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f01f3757d4dcc0a479b0515dcc200ce627a6f23ccb9adcae9bbe77e491cb288f-merged.mount: Deactivated successfully.
Jan 27 09:06:19 compute-0 podman[269336]: 2026-01-27 09:06:19.032418432 +0000 UTC m=+1.028928844 container remove e83ba664aa6883a59dd0c50412178cab0339f4bec05d732662f4aa099f0e3755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 09:06:19 compute-0 systemd[1]: libpod-conmon-e83ba664aa6883a59dd0c50412178cab0339f4bec05d732662f4aa099f0e3755.scope: Deactivated successfully.
Jan 27 09:06:19 compute-0 sudo[269231]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:19 compute-0 sudo[269374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:19 compute-0 sudo[269374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:19 compute-0 sudo[269374]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:19 compute-0 sudo[269399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:06:19 compute-0 sudo[269399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:19 compute-0 sudo[269399]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:19 compute-0 sudo[269424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:19 compute-0 sudo[269424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:19 compute-0 sudo[269424]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:19.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:19 compute-0 sudo[269449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:06:19 compute-0 sudo[269449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:19 compute-0 podman[269473]: 2026-01-27 09:06:19.397099929 +0000 UTC m=+0.086936983 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:06:19 compute-0 podman[269540]: 2026-01-27 09:06:19.617153848 +0000 UTC m=+0.033087200 container create 8ebd6f8d039e3c47e724994bfc0af0f46704b8fb79b70cf50b21c5006ac39189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 09:06:19 compute-0 systemd[1]: Started libpod-conmon-8ebd6f8d039e3c47e724994bfc0af0f46704b8fb79b70cf50b21c5006ac39189.scope.
Jan 27 09:06:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:06:19 compute-0 podman[269540]: 2026-01-27 09:06:19.602294223 +0000 UTC m=+0.018227596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:06:19 compute-0 podman[269540]: 2026-01-27 09:06:19.708052517 +0000 UTC m=+0.123985889 container init 8ebd6f8d039e3c47e724994bfc0af0f46704b8fb79b70cf50b21c5006ac39189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:06:19 compute-0 podman[269540]: 2026-01-27 09:06:19.717550995 +0000 UTC m=+0.133484347 container start 8ebd6f8d039e3c47e724994bfc0af0f46704b8fb79b70cf50b21c5006ac39189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:06:19 compute-0 podman[269540]: 2026-01-27 09:06:19.721181814 +0000 UTC m=+0.137115166 container attach 8ebd6f8d039e3c47e724994bfc0af0f46704b8fb79b70cf50b21c5006ac39189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Jan 27 09:06:19 compute-0 jolly_kowalevski[269557]: 167 167
Jan 27 09:06:19 compute-0 systemd[1]: libpod-8ebd6f8d039e3c47e724994bfc0af0f46704b8fb79b70cf50b21c5006ac39189.scope: Deactivated successfully.
Jan 27 09:06:19 compute-0 podman[269540]: 2026-01-27 09:06:19.725626194 +0000 UTC m=+0.141559546 container died 8ebd6f8d039e3c47e724994bfc0af0f46704b8fb79b70cf50b21c5006ac39189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:06:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-89ecf672e273ebd33dfe5a266725de5835c8737325ac6a9e6fd549158246570c-merged.mount: Deactivated successfully.
Jan 27 09:06:19 compute-0 podman[269540]: 2026-01-27 09:06:19.766477444 +0000 UTC m=+0.182410796 container remove 8ebd6f8d039e3c47e724994bfc0af0f46704b8fb79b70cf50b21c5006ac39189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:06:19 compute-0 systemd[1]: libpod-conmon-8ebd6f8d039e3c47e724994bfc0af0f46704b8fb79b70cf50b21c5006ac39189.scope: Deactivated successfully.
Jan 27 09:06:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:06:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:19.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:06:19 compute-0 podman[269580]: 2026-01-27 09:06:19.900649699 +0000 UTC m=+0.020880128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:06:19 compute-0 podman[269580]: 2026-01-27 09:06:19.996514304 +0000 UTC m=+0.116744713 container create 8b2dc20c4fdfe0602b725e7bbdd20fa6f3ac6f070b15e02d6fcc1f1216e20484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 09:06:20 compute-0 systemd[1]: Started libpod-conmon-8b2dc20c4fdfe0602b725e7bbdd20fa6f3ac6f070b15e02d6fcc1f1216e20484.scope.
Jan 27 09:06:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f87783091ae8997a1b74a1cdb946d15e0ebb79fa61dbb1e7ac1c52560166ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f87783091ae8997a1b74a1cdb946d15e0ebb79fa61dbb1e7ac1c52560166ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f87783091ae8997a1b74a1cdb946d15e0ebb79fa61dbb1e7ac1c52560166ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f87783091ae8997a1b74a1cdb946d15e0ebb79fa61dbb1e7ac1c52560166ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:20 compute-0 podman[269580]: 2026-01-27 09:06:20.241019076 +0000 UTC m=+0.361249475 container init 8b2dc20c4fdfe0602b725e7bbdd20fa6f3ac6f070b15e02d6fcc1f1216e20484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:06:20 compute-0 podman[269580]: 2026-01-27 09:06:20.248395117 +0000 UTC m=+0.368625516 container start 8b2dc20c4fdfe0602b725e7bbdd20fa6f3ac6f070b15e02d6fcc1f1216e20484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 27 09:06:20 compute-0 ceph-mon[74357]: pgmap v1291: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:06:20 compute-0 podman[269580]: 2026-01-27 09:06:20.341439984 +0000 UTC m=+0.461670383 container attach 8b2dc20c4fdfe0602b725e7bbdd20fa6f3ac6f070b15e02d6fcc1f1216e20484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:06:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:06:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:21 compute-0 serene_wozniak[269597]: {
Jan 27 09:06:21 compute-0 serene_wozniak[269597]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:06:21 compute-0 serene_wozniak[269597]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:06:21 compute-0 serene_wozniak[269597]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:06:21 compute-0 serene_wozniak[269597]:         "osd_id": 0,
Jan 27 09:06:21 compute-0 serene_wozniak[269597]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:06:21 compute-0 serene_wozniak[269597]:         "type": "bluestore"
Jan 27 09:06:21 compute-0 serene_wozniak[269597]:     }
Jan 27 09:06:21 compute-0 serene_wozniak[269597]: }
Jan 27 09:06:21 compute-0 systemd[1]: libpod-8b2dc20c4fdfe0602b725e7bbdd20fa6f3ac6f070b15e02d6fcc1f1216e20484.scope: Deactivated successfully.
Jan 27 09:06:21 compute-0 podman[269619]: 2026-01-27 09:06:21.168879763 +0000 UTC m=+0.021408842 container died 8b2dc20c4fdfe0602b725e7bbdd20fa6f3ac6f070b15e02d6fcc1f1216e20484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:06:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4f87783091ae8997a1b74a1cdb946d15e0ebb79fa61dbb1e7ac1c52560166ca-merged.mount: Deactivated successfully.
Jan 27 09:06:21 compute-0 podman[269619]: 2026-01-27 09:06:21.227792035 +0000 UTC m=+0.080321094 container remove 8b2dc20c4fdfe0602b725e7bbdd20fa6f3ac6f070b15e02d6fcc1f1216e20484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:06:21 compute-0 systemd[1]: libpod-conmon-8b2dc20c4fdfe0602b725e7bbdd20fa6f3ac6f070b15e02d6fcc1f1216e20484.scope: Deactivated successfully.
Jan 27 09:06:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:21.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:21 compute-0 sudo[269449]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:06:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:06:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:06:21 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:06:21 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 6948a71c-b27d-4963-af2e-d5fce7ff1d6e does not exist
Jan 27 09:06:21 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5385f8f8-c036-4ff4-a6ca-d98e31500931 does not exist
Jan 27 09:06:21 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b53701cd-2d54-4166-815f-9f82f286b8a5 does not exist
Jan 27 09:06:21 compute-0 ceph-mon[74357]: pgmap v1292: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:06:21 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:06:21 compute-0 sudo[269634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:21 compute-0 sudo[269634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:21 compute-0 sudo[269634]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:21 compute-0 sudo[269659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:06:21 compute-0 sudo[269659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:21 compute-0 sudo[269659]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:21.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:22 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:06:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:06:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:06:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:23.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:06:23 compute-0 ceph-mon[74357]: pgmap v1293: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:06:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:23.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2968946293969849 quantized to 32 (current 32)
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:06:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:06:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:25.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:25 compute-0 ceph-mon[74357]: pgmap v1294: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:06:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:25.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:06:26 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:06:26.675 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:06:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:27.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:27.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:27 compute-0 ceph-mon[74357]: pgmap v1295: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:06:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:06:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 27 09:06:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 27 09:06:28 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 27 09:06:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:29.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:29 compute-0 sudo[269688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:29 compute-0 sudo[269688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:29 compute-0 sudo[269688]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:29 compute-0 sudo[269713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:29 compute-0 sudo[269713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:29 compute-0 sudo[269713]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:29.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:30 compute-0 ceph-mon[74357]: pgmap v1296: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:06:30 compute-0 ceph-mon[74357]: osdmap e159: 3 total, 3 up, 3 in
Jan 27 09:06:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 100 MiB data, 293 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 13 op/s
Jan 27 09:06:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:06:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:31.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:06:31 compute-0 ceph-mon[74357]: pgmap v1298: 305 pgs: 305 active+clean; 100 MiB data, 293 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 13 op/s
Jan 27 09:06:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:06:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:31.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:06:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 27 09:06:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:33.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:33 compute-0 podman[269740]: 2026-01-27 09:06:33.278487851 +0000 UTC m=+0.083755347 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 09:06:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:06:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:33.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:06:34 compute-0 ceph-mon[74357]: pgmap v1299: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 27 09:06:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 27 09:06:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:35.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:35.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:36 compute-0 ceph-mon[74357]: pgmap v1300: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 27 09:06:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 27 09:06:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 27 09:06:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 27 09:06:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:06:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:37.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:06:37 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 27 09:06:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:37.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:38 compute-0 ceph-mon[74357]: pgmap v1301: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 27 09:06:38 compute-0 ceph-mon[74357]: osdmap e160: 3 total, 3 up, 3 in
Jan 27 09:06:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.1 MiB/s wr, 20 op/s
Jan 27 09:06:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:39.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:06:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:06:39 compute-0 ceph-mon[74357]: pgmap v1303: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 2.1 MiB/s wr, 20 op/s
Jan 27 09:06:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 96 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 821 KiB/s wr, 25 op/s
Jan 27 09:06:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:41.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:06:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:41.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:06:42 compute-0 ceph-mon[74357]: pgmap v1304: 305 pgs: 305 active+clean; 96 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 821 KiB/s wr, 25 op/s
Jan 27 09:06:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 27 09:06:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:43.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:43 compute-0 ceph-mon[74357]: pgmap v1305: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 27 09:06:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:43.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 27 09:06:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 27 09:06:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 27 09:06:44 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 27 09:06:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:06:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:06:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:06:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:06:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:06:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:06:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:45.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:45 compute-0 ceph-mon[74357]: pgmap v1306: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 27 09:06:45 compute-0 ceph-mon[74357]: osdmap e161: 3 total, 3 up, 3 in
Jan 27 09:06:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 27 09:06:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:45.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 27 09:06:45 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 27 09:06:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 54 op/s
Jan 27 09:06:46 compute-0 ceph-mon[74357]: osdmap e162: 3 total, 3 up, 3 in
Jan 27 09:06:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:47.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:47.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:47 compute-0 ceph-mon[74357]: pgmap v1309: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 54 op/s
Jan 27 09:06:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 2.6 MiB/s wr, 30 op/s
Jan 27 09:06:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:49.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:49 compute-0 sudo[269768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:49 compute-0 sudo[269768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:49 compute-0 sudo[269768]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:49 compute-0 sudo[269799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:06:49 compute-0 sudo[269799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:06:49 compute-0 sudo[269799]: pam_unix(sudo:session): session closed for user root
Jan 27 09:06:49 compute-0 podman[269792]: 2026-01-27 09:06:49.640621588 +0000 UTC m=+0.073391764 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Jan 27 09:06:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:49.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:50 compute-0 ceph-mon[74357]: pgmap v1310: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 2.6 MiB/s wr, 30 op/s
Jan 27 09:06:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 108 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 2.6 MiB/s wr, 36 op/s
Jan 27 09:06:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 27 09:06:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:51.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 27 09:06:51 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 27 09:06:51 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1545845839' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:06:51 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1545845839' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:06:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:51.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:52 compute-0 ceph-mon[74357]: pgmap v1311: 305 pgs: 305 active+clean; 108 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 2.6 MiB/s wr, 36 op/s
Jan 27 09:06:52 compute-0 ceph-mon[74357]: osdmap e163: 3 total, 3 up, 3 in
Jan 27 09:06:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 108 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.6 MiB/s wr, 49 op/s
Jan 27 09:06:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:53.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:53 compute-0 ceph-mon[74357]: pgmap v1313: 305 pgs: 305 active+clean; 108 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.6 MiB/s wr, 49 op/s
Jan 27 09:06:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:53.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:06:54.247 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:06:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:06:54.247 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:06:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:06:54.247 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:06:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 108 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.0 KiB/s wr, 22 op/s
Jan 27 09:06:54 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1299323153' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:06:54 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1299323153' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:06:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:06:55 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3420507867' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:06:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:06:55 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3420507867' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:06:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:06:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:55.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:06:55 compute-0 ceph-mon[74357]: pgmap v1314: 305 pgs: 305 active+clean; 108 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.0 KiB/s wr, 22 op/s
Jan 27 09:06:55 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3420507867' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:06:55 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3420507867' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:06:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:55.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 41 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 KiB/s wr, 76 op/s
Jan 27 09:06:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:06:56.662 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:06:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:06:56.663 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:06:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:06:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:57.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:06:57 compute-0 ceph-mon[74357]: pgmap v1315: 305 pgs: 305 active+clean; 41 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 KiB/s wr, 76 op/s
Jan 27 09:06:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:57.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 41 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 KiB/s wr, 76 op/s
Jan 27 09:06:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:06:59.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:06:59 compute-0 ceph-mon[74357]: pgmap v1316: 305 pgs: 305 active+clean; 41 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 KiB/s wr, 76 op/s
Jan 27 09:06:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/720574723' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:06:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/720574723' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:06:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:06:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:06:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:06:59.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 2.7 KiB/s wr, 67 op/s
Jan 27 09:07:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 27 09:07:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 27 09:07:01 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 27 09:07:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:01.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:01.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:02 compute-0 ceph-mon[74357]: pgmap v1317: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 2.7 KiB/s wr, 67 op/s
Jan 27 09:07:02 compute-0 ceph-mon[74357]: osdmap e164: 3 total, 3 up, 3 in
Jan 27 09:07:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 KiB/s wr, 56 op/s
Jan 27 09:07:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:03.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:03 compute-0 nova_compute[247671]: 2026-01-27 09:07:03.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:07:03 compute-0 nova_compute[247671]: 2026-01-27 09:07:03.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:07:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:03.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:04 compute-0 ceph-mon[74357]: pgmap v1319: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 KiB/s wr, 56 op/s
Jan 27 09:07:04 compute-0 podman[269851]: 2026-01-27 09:07:04.249716959 +0000 UTC m=+0.059973621 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:07:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 KiB/s wr, 56 op/s
Jan 27 09:07:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:05.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:05 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:07:05.664 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:07:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:05.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:06 compute-0 ceph-mon[74357]: pgmap v1320: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 KiB/s wr, 56 op/s
Jan 27 09:07:06 compute-0 nova_compute[247671]: 2026-01-27 09:07:06.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:07:06 compute-0 nova_compute[247671]: 2026-01-27 09:07:06.450 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:07:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Jan 27 09:07:07 compute-0 ceph-mon[74357]: pgmap v1321: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Jan 27 09:07:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:07.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:07 compute-0 nova_compute[247671]: 2026-01-27 09:07:07.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:07:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:07.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:08 compute-0 nova_compute[247671]: 2026-01-27 09:07:08.417 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:07:08 compute-0 nova_compute[247671]: 2026-01-27 09:07:08.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:07:08 compute-0 nova_compute[247671]: 2026-01-27 09:07:08.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:07:08 compute-0 nova_compute[247671]: 2026-01-27 09:07:08.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:07:08 compute-0 nova_compute[247671]: 2026-01-27 09:07:08.480 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:07:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Jan 27 09:07:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:09.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:09 compute-0 sudo[269873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:09 compute-0 sudo[269873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:09 compute-0 sudo[269873]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:09 compute-0 sudo[269898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:09 compute-0 sudo[269898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:09 compute-0 ceph-mon[74357]: pgmap v1322: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Jan 27 09:07:09 compute-0 sudo[269898]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:09.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:10 compute-0 nova_compute[247671]: 2026-01-27 09:07:10.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:07:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Jan 27 09:07:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:11.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:11.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:11 compute-0 ceph-mon[74357]: pgmap v1323: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Jan 27 09:07:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 439 B/s rd, 351 B/s wr, 1 op/s
Jan 27 09:07:13 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2315027942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:07:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:13.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:07:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:13.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:07:14 compute-0 ceph-mon[74357]: pgmap v1324: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 439 B/s rd, 351 B/s wr, 1 op/s
Jan 27 09:07:14 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3114940437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:07:14 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3901413366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:07:14 compute-0 nova_compute[247671]: 2026-01-27 09:07:14.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:07:14 compute-0 nova_compute[247671]: 2026-01-27 09:07:14.494 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:07:14 compute-0 nova_compute[247671]: 2026-01-27 09:07:14.494 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:07:14 compute-0 nova_compute[247671]: 2026-01-27 09:07:14.495 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:07:14 compute-0 nova_compute[247671]: 2026-01-27 09:07:14.495 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:07:14 compute-0 nova_compute[247671]: 2026-01-27 09:07:14.495 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:07:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 09:07:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:07:14 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788223782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:07:14 compute-0 nova_compute[247671]: 2026-01-27 09:07:14.920 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:07:15 compute-0 nova_compute[247671]: 2026-01-27 09:07:15.068 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:07:15 compute-0 nova_compute[247671]: 2026-01-27 09:07:15.069 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5187MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:07:15 compute-0 nova_compute[247671]: 2026-01-27 09:07:15.070 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:07:15 compute-0 nova_compute[247671]: 2026-01-27 09:07:15.070 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:07:15
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.meta', '.mgr']
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:07:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3001595295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:07:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3788223782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:07:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:15.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:07:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:07:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:15 compute-0 nova_compute[247671]: 2026-01-27 09:07:15.915 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:07:15 compute-0 nova_compute[247671]: 2026-01-27 09:07:15.916 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:07:15 compute-0 nova_compute[247671]: 2026-01-27 09:07:15.916 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:07:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:15.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:16 compute-0 ceph-mon[74357]: pgmap v1325: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 09:07:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 09:07:16 compute-0 nova_compute[247671]: 2026-01-27 09:07:16.684 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:07:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:07:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3860800454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:07:17 compute-0 nova_compute[247671]: 2026-01-27 09:07:17.164 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:07:17 compute-0 nova_compute[247671]: 2026-01-27 09:07:17.169 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:07:17 compute-0 nova_compute[247671]: 2026-01-27 09:07:17.270 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:07:17 compute-0 nova_compute[247671]: 2026-01-27 09:07:17.271 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:07:17 compute-0 nova_compute[247671]: 2026-01-27 09:07:17.272 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:07:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:17.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:17.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:18 compute-0 ceph-mon[74357]: pgmap v1326: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 09:07:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3860800454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:07:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:19.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:19.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:20 compute-0 ceph-mon[74357]: pgmap v1327: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:20 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3443233772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:07:20 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3443233772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:07:20 compute-0 nova_compute[247671]: 2026-01-27 09:07:20.273 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:07:20 compute-0 nova_compute[247671]: 2026-01-27 09:07:20.274 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:07:20 compute-0 podman[269972]: 2026-01-27 09:07:20.343991971 +0000 UTC m=+0.156660197 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 09:07:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 9.1 KiB/s rd, 0 B/s wr, 11 op/s
Jan 27 09:07:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:21 compute-0 ceph-mon[74357]: pgmap v1328: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 9.1 KiB/s rd, 0 B/s wr, 11 op/s
Jan 27 09:07:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:21.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:21 compute-0 sudo[269999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:21 compute-0 sudo[269999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:21 compute-0 sudo[269999]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:21 compute-0 sudo[270024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:07:21 compute-0 sudo[270024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:21 compute-0 sudo[270024]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:21 compute-0 sudo[270049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:21 compute-0 sudo[270049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:21 compute-0 sudo[270049]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:21 compute-0 sudo[270074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:07:21 compute-0 sudo[270074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:21.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:22 compute-0 sudo[270074]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:07:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 09:07:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:07:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 09:07:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:23.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:07:23 compute-0 ceph-mon[74357]: pgmap v1329: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:07:23 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:07:23 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:07:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:23.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:07:24 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:07:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:07:24 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:07:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:07:24 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 58e10da0-eddc-4fc6-ab43-3308e539623e does not exist
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 0c00c27c-a9c2-4c30-86b0-c96eeb65d378 does not exist
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f329ee6f-e3de-49f3-878f-35448043fe67 does not exist
Jan 27 09:07:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:07:24 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:07:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:07:24 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:07:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:07:24 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:07:24 compute-0 sudo[270131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:24 compute-0 sudo[270131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:24 compute-0 sudo[270131]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:24 compute-0 sudo[270156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:07:24 compute-0 sudo[270156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:24 compute-0 sudo[270156]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:24 compute-0 sudo[270181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:24 compute-0 sudo[270181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:24 compute-0 sudo[270181]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:24 compute-0 sudo[270206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:07:24 compute-0 sudo[270206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:07:24 compute-0 podman[270272]: 2026-01-27 09:07:24.580454794 +0000 UTC m=+0.039216516 container create ff9cdcfd086f0f053318487bee0c3e352879f76820127093fbcb3c2040745fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldstine, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:07:24 compute-0 systemd[1]: Started libpod-conmon-ff9cdcfd086f0f053318487bee0c3e352879f76820127093fbcb3c2040745fde.scope.
Jan 27 09:07:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:07:24 compute-0 podman[270272]: 2026-01-27 09:07:24.562319672 +0000 UTC m=+0.021081444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:07:24 compute-0 podman[270272]: 2026-01-27 09:07:24.670721586 +0000 UTC m=+0.129483338 container init ff9cdcfd086f0f053318487bee0c3e352879f76820127093fbcb3c2040745fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldstine, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:07:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:07:24 compute-0 podman[270272]: 2026-01-27 09:07:24.67928226 +0000 UTC m=+0.138043982 container start ff9cdcfd086f0f053318487bee0c3e352879f76820127093fbcb3c2040745fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 09:07:24 compute-0 podman[270272]: 2026-01-27 09:07:24.683558865 +0000 UTC m=+0.142320607 container attach ff9cdcfd086f0f053318487bee0c3e352879f76820127093fbcb3c2040745fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldstine, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 09:07:24 compute-0 upbeat_goldstine[270289]: 167 167
Jan 27 09:07:24 compute-0 systemd[1]: libpod-ff9cdcfd086f0f053318487bee0c3e352879f76820127093fbcb3c2040745fde.scope: Deactivated successfully.
Jan 27 09:07:24 compute-0 conmon[270289]: conmon ff9cdcfd086f0f053318 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff9cdcfd086f0f053318487bee0c3e352879f76820127093fbcb3c2040745fde.scope/container/memory.events
Jan 27 09:07:24 compute-0 podman[270272]: 2026-01-27 09:07:24.688596392 +0000 UTC m=+0.147358114 container died ff9cdcfd086f0f053318487bee0c3e352879f76820127093fbcb3c2040745fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:07:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ced0d48f6d3b5d6a96306e31d921440764936b169d77ba9ea442c63945f7555-merged.mount: Deactivated successfully.
Jan 27 09:07:24 compute-0 podman[270272]: 2026-01-27 09:07:24.731061956 +0000 UTC m=+0.189823678 container remove ff9cdcfd086f0f053318487bee0c3e352879f76820127093fbcb3c2040745fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldstine, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 27 09:07:24 compute-0 systemd[1]: libpod-conmon-ff9cdcfd086f0f053318487bee0c3e352879f76820127093fbcb3c2040745fde.scope: Deactivated successfully.
Jan 27 09:07:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:07:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:07:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:07:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:07:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:07:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:07:24 compute-0 podman[270312]: 2026-01-27 09:07:24.88654319 +0000 UTC m=+0.044272784 container create c4824530f2973547a99cb78173b0c0e28b7a53c1c8e934d3b5cb23e796573878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_roentgen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 09:07:24 compute-0 systemd[1]: Started libpod-conmon-c4824530f2973547a99cb78173b0c0e28b7a53c1c8e934d3b5cb23e796573878.scope.
Jan 27 09:07:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72342b4e775c31379f342715261d0db8fa596a6e9c35ad7a1cd3d07ae7cc57c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72342b4e775c31379f342715261d0db8fa596a6e9c35ad7a1cd3d07ae7cc57c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72342b4e775c31379f342715261d0db8fa596a6e9c35ad7a1cd3d07ae7cc57c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72342b4e775c31379f342715261d0db8fa596a6e9c35ad7a1cd3d07ae7cc57c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72342b4e775c31379f342715261d0db8fa596a6e9c35ad7a1cd3d07ae7cc57c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:24 compute-0 podman[270312]: 2026-01-27 09:07:24.960936441 +0000 UTC m=+0.118666065 container init c4824530f2973547a99cb78173b0c0e28b7a53c1c8e934d3b5cb23e796573878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_roentgen, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:07:24 compute-0 podman[270312]: 2026-01-27 09:07:24.871229034 +0000 UTC m=+0.028958648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:07:24 compute-0 podman[270312]: 2026-01-27 09:07:24.966493392 +0000 UTC m=+0.124222986 container start c4824530f2973547a99cb78173b0c0e28b7a53c1c8e934d3b5cb23e796573878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_roentgen, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:07:24 compute-0 podman[270312]: 2026-01-27 09:07:24.969605696 +0000 UTC m=+0.127335290 container attach c4824530f2973547a99cb78173b0c0e28b7a53c1c8e934d3b5cb23e796573878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 09:07:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:25.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:25 compute-0 wonderful_roentgen[270329]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:07:25 compute-0 wonderful_roentgen[270329]: --> relative data size: 1.0
Jan 27 09:07:25 compute-0 wonderful_roentgen[270329]: --> All data devices are unavailable
Jan 27 09:07:25 compute-0 systemd[1]: libpod-c4824530f2973547a99cb78173b0c0e28b7a53c1c8e934d3b5cb23e796573878.scope: Deactivated successfully.
Jan 27 09:07:25 compute-0 podman[270312]: 2026-01-27 09:07:25.750046959 +0000 UTC m=+0.907776573 container died c4824530f2973547a99cb78173b0c0e28b7a53c1c8e934d3b5cb23e796573878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_roentgen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:07:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-72342b4e775c31379f342715261d0db8fa596a6e9c35ad7a1cd3d07ae7cc57c2-merged.mount: Deactivated successfully.
Jan 27 09:07:25 compute-0 podman[270312]: 2026-01-27 09:07:25.795840903 +0000 UTC m=+0.953570497 container remove c4824530f2973547a99cb78173b0c0e28b7a53c1c8e934d3b5cb23e796573878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_roentgen, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:07:25 compute-0 systemd[1]: libpod-conmon-c4824530f2973547a99cb78173b0c0e28b7a53c1c8e934d3b5cb23e796573878.scope: Deactivated successfully.
Jan 27 09:07:25 compute-0 sudo[270206]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:25 compute-0 sudo[270356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:25 compute-0 sudo[270356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:25 compute-0 sudo[270356]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:25.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:25 compute-0 sudo[270381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:07:25 compute-0 sudo[270381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:25 compute-0 sudo[270381]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:25 compute-0 sudo[270406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:25 compute-0 sudo[270406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:25 compute-0 sudo[270406]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:26 compute-0 sudo[270431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:07:26 compute-0 sudo[270431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:26 compute-0 ceph-mon[74357]: pgmap v1330: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:07:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:07:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6834 writes, 29K keys, 6834 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6834 writes, 6834 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1567 writes, 6658 keys, 1567 commit groups, 1.0 writes per commit group, ingest: 10.65 MB, 0.02 MB/s
                                           Interval WAL: 1567 writes, 1567 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     62.0      0.62              0.11        17    0.036       0      0       0.0       0.0
                                             L6      1/0    8.73 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5    135.1    110.7      1.22              0.35        16    0.076     79K   8938       0.0       0.0
                                            Sum      1/0    8.73 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.5     89.6     94.3      1.84              0.46        33    0.056     79K   8938       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.3     53.8     55.5      0.85              0.12         8    0.106     22K   2597       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    135.1    110.7      1.22              0.35        16    0.076     79K   8938       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     62.5      0.61              0.11        16    0.038       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.4      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.038, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.17 GB write, 0.07 MB/s write, 0.16 GB read, 0.07 MB/s read, 1.8 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f59eb431f0#2 capacity: 304.00 MB usage: 17.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000202 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1031,16.87 MB,5.55036%) FilterBlock(34,225.05 KB,0.0722935%) IndexBlock(34,415.92 KB,0.13361%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 27 09:07:26 compute-0 podman[270498]: 2026-01-27 09:07:26.385353879 +0000 UTC m=+0.039370260 container create 3fbf048ce8186c688d86193ceb89be692f1eacfd26b28ebf12e9bfc0b669021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_franklin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:07:26 compute-0 systemd[1]: Started libpod-conmon-3fbf048ce8186c688d86193ceb89be692f1eacfd26b28ebf12e9bfc0b669021f.scope.
Jan 27 09:07:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:07:26 compute-0 podman[270498]: 2026-01-27 09:07:26.459074872 +0000 UTC m=+0.113091263 container init 3fbf048ce8186c688d86193ceb89be692f1eacfd26b28ebf12e9bfc0b669021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:07:26 compute-0 podman[270498]: 2026-01-27 09:07:26.370022722 +0000 UTC m=+0.024039123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:07:26 compute-0 podman[270498]: 2026-01-27 09:07:26.467121641 +0000 UTC m=+0.121138022 container start 3fbf048ce8186c688d86193ceb89be692f1eacfd26b28ebf12e9bfc0b669021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:07:26 compute-0 podman[270498]: 2026-01-27 09:07:26.470331438 +0000 UTC m=+0.124347849 container attach 3fbf048ce8186c688d86193ceb89be692f1eacfd26b28ebf12e9bfc0b669021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 09:07:26 compute-0 sleepy_franklin[270514]: 167 167
Jan 27 09:07:26 compute-0 systemd[1]: libpod-3fbf048ce8186c688d86193ceb89be692f1eacfd26b28ebf12e9bfc0b669021f.scope: Deactivated successfully.
Jan 27 09:07:26 compute-0 podman[270498]: 2026-01-27 09:07:26.476612748 +0000 UTC m=+0.130629129 container died 3fbf048ce8186c688d86193ceb89be692f1eacfd26b28ebf12e9bfc0b669021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Jan 27 09:07:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c80cc1782961b18d975580bbea9277c5d3c5b32400e7794c7932aef491e755b2-merged.mount: Deactivated successfully.
Jan 27 09:07:26 compute-0 podman[270498]: 2026-01-27 09:07:26.513053038 +0000 UTC m=+0.167069419 container remove 3fbf048ce8186c688d86193ceb89be692f1eacfd26b28ebf12e9bfc0b669021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 09:07:26 compute-0 systemd[1]: libpod-conmon-3fbf048ce8186c688d86193ceb89be692f1eacfd26b28ebf12e9bfc0b669021f.scope: Deactivated successfully.
Jan 27 09:07:26 compute-0 podman[270539]: 2026-01-27 09:07:26.674509315 +0000 UTC m=+0.040503272 container create 970c11cb1939d6414ccf005bd8050c5d27c0723ceaed247c7dfbaf9054528f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_robinson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 27 09:07:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:07:26 compute-0 systemd[1]: Started libpod-conmon-970c11cb1939d6414ccf005bd8050c5d27c0723ceaed247c7dfbaf9054528f73.scope.
Jan 27 09:07:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009fbde9b17e63fc8f1d330b7fbbced5c0b134dbc33646885142150f3164a416/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009fbde9b17e63fc8f1d330b7fbbced5c0b134dbc33646885142150f3164a416/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009fbde9b17e63fc8f1d330b7fbbced5c0b134dbc33646885142150f3164a416/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009fbde9b17e63fc8f1d330b7fbbced5c0b134dbc33646885142150f3164a416/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:26 compute-0 podman[270539]: 2026-01-27 09:07:26.65519181 +0000 UTC m=+0.021185797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:07:26 compute-0 podman[270539]: 2026-01-27 09:07:26.759218206 +0000 UTC m=+0.125212193 container init 970c11cb1939d6414ccf005bd8050c5d27c0723ceaed247c7dfbaf9054528f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_robinson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:07:26 compute-0 podman[270539]: 2026-01-27 09:07:26.765816395 +0000 UTC m=+0.131810352 container start 970c11cb1939d6414ccf005bd8050c5d27c0723ceaed247c7dfbaf9054528f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 27 09:07:26 compute-0 podman[270539]: 2026-01-27 09:07:26.77003554 +0000 UTC m=+0.136029507 container attach 970c11cb1939d6414ccf005bd8050c5d27c0723ceaed247c7dfbaf9054528f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_robinson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:07:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:27.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]: {
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:     "0": [
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:         {
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             "devices": [
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "/dev/loop3"
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             ],
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             "lv_name": "ceph_lv0",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             "lv_size": "7511998464",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             "name": "ceph_lv0",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             "tags": {
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "ceph.cluster_name": "ceph",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "ceph.crush_device_class": "",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "ceph.encrypted": "0",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "ceph.osd_id": "0",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "ceph.type": "block",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:                 "ceph.vdo": "0"
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             },
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             "type": "block",
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:             "vg_name": "ceph_vg0"
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:         }
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]:     ]
Jan 27 09:07:27 compute-0 dazzling_robinson[270556]: }
Jan 27 09:07:27 compute-0 systemd[1]: libpod-970c11cb1939d6414ccf005bd8050c5d27c0723ceaed247c7dfbaf9054528f73.scope: Deactivated successfully.
Jan 27 09:07:27 compute-0 podman[270539]: 2026-01-27 09:07:27.56253276 +0000 UTC m=+0.928526757 container died 970c11cb1939d6414ccf005bd8050c5d27c0723ceaed247c7dfbaf9054528f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 27 09:07:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-009fbde9b17e63fc8f1d330b7fbbced5c0b134dbc33646885142150f3164a416-merged.mount: Deactivated successfully.
Jan 27 09:07:27 compute-0 podman[270539]: 2026-01-27 09:07:27.619916349 +0000 UTC m=+0.985910306 container remove 970c11cb1939d6414ccf005bd8050c5d27c0723ceaed247c7dfbaf9054528f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_robinson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:07:27 compute-0 systemd[1]: libpod-conmon-970c11cb1939d6414ccf005bd8050c5d27c0723ceaed247c7dfbaf9054528f73.scope: Deactivated successfully.
Jan 27 09:07:27 compute-0 sudo[270431]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:27 compute-0 sudo[270580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:27 compute-0 sudo[270580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:27 compute-0 sudo[270580]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:27 compute-0 sudo[270605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:07:27 compute-0 sudo[270605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:27 compute-0 sudo[270605]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:27 compute-0 sudo[270630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:27 compute-0 sudo[270630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:27 compute-0 sudo[270630]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:27 compute-0 sudo[270655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:07:27 compute-0 sudo[270655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:27.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:28 compute-0 ceph-mon[74357]: pgmap v1331: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:07:28 compute-0 podman[270719]: 2026-01-27 09:07:28.250932222 +0000 UTC m=+0.040778769 container create 4cd43d6335ccc58ef5abe5a81a98d1c5ac24654c180514f90f69b4ed2df9d31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brattain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 09:07:28 compute-0 systemd[1]: Started libpod-conmon-4cd43d6335ccc58ef5abe5a81a98d1c5ac24654c180514f90f69b4ed2df9d31c.scope.
Jan 27 09:07:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:07:28 compute-0 podman[270719]: 2026-01-27 09:07:28.320011019 +0000 UTC m=+0.109857576 container init 4cd43d6335ccc58ef5abe5a81a98d1c5ac24654c180514f90f69b4ed2df9d31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:07:28 compute-0 podman[270719]: 2026-01-27 09:07:28.331185552 +0000 UTC m=+0.121032099 container start 4cd43d6335ccc58ef5abe5a81a98d1c5ac24654c180514f90f69b4ed2df9d31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:07:28 compute-0 podman[270719]: 2026-01-27 09:07:28.235447931 +0000 UTC m=+0.025294498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:07:28 compute-0 determined_brattain[270735]: 167 167
Jan 27 09:07:28 compute-0 systemd[1]: libpod-4cd43d6335ccc58ef5abe5a81a98d1c5ac24654c180514f90f69b4ed2df9d31c.scope: Deactivated successfully.
Jan 27 09:07:28 compute-0 podman[270719]: 2026-01-27 09:07:28.334939174 +0000 UTC m=+0.124785751 container attach 4cd43d6335ccc58ef5abe5a81a98d1c5ac24654c180514f90f69b4ed2df9d31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:07:28 compute-0 podman[270719]: 2026-01-27 09:07:28.33549514 +0000 UTC m=+0.125341677 container died 4cd43d6335ccc58ef5abe5a81a98d1c5ac24654c180514f90f69b4ed2df9d31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 09:07:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-de7a8c702504a3b81c4bc09c1ef1404f76cb9d3337b043a5521c1601ac90704f-merged.mount: Deactivated successfully.
Jan 27 09:07:28 compute-0 podman[270719]: 2026-01-27 09:07:28.368111735 +0000 UTC m=+0.157958282 container remove 4cd43d6335ccc58ef5abe5a81a98d1c5ac24654c180514f90f69b4ed2df9d31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brattain, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:07:28 compute-0 systemd[1]: libpod-conmon-4cd43d6335ccc58ef5abe5a81a98d1c5ac24654c180514f90f69b4ed2df9d31c.scope: Deactivated successfully.
Jan 27 09:07:28 compute-0 podman[270759]: 2026-01-27 09:07:28.525011498 +0000 UTC m=+0.044684435 container create 24290f12620b3230c87c9004e514cfcd37043e1504d42eda65337984cf630c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 09:07:28 compute-0 systemd[1]: Started libpod-conmon-24290f12620b3230c87c9004e514cfcd37043e1504d42eda65337984cf630c21.scope.
Jan 27 09:07:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93048b5d775fea1864c0fb13036b29e962b7332fa8fb7b46fbf1171f33c21661/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93048b5d775fea1864c0fb13036b29e962b7332fa8fb7b46fbf1171f33c21661/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93048b5d775fea1864c0fb13036b29e962b7332fa8fb7b46fbf1171f33c21661/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93048b5d775fea1864c0fb13036b29e962b7332fa8fb7b46fbf1171f33c21661/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:28 compute-0 podman[270759]: 2026-01-27 09:07:28.50302946 +0000 UTC m=+0.022702357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:07:28 compute-0 podman[270759]: 2026-01-27 09:07:28.601819555 +0000 UTC m=+0.121492452 container init 24290f12620b3230c87c9004e514cfcd37043e1504d42eda65337984cf630c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shannon, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:07:28 compute-0 podman[270759]: 2026-01-27 09:07:28.607388786 +0000 UTC m=+0.127061663 container start 24290f12620b3230c87c9004e514cfcd37043e1504d42eda65337984cf630c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:07:28 compute-0 podman[270759]: 2026-01-27 09:07:28.610397058 +0000 UTC m=+0.130069935 container attach 24290f12620b3230c87c9004e514cfcd37043e1504d42eda65337984cf630c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shannon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:07:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:07:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:29.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:29 compute-0 pensive_shannon[270776]: {
Jan 27 09:07:29 compute-0 pensive_shannon[270776]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:07:29 compute-0 pensive_shannon[270776]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:07:29 compute-0 pensive_shannon[270776]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:07:29 compute-0 pensive_shannon[270776]:         "osd_id": 0,
Jan 27 09:07:29 compute-0 pensive_shannon[270776]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:07:29 compute-0 pensive_shannon[270776]:         "type": "bluestore"
Jan 27 09:07:29 compute-0 pensive_shannon[270776]:     }
Jan 27 09:07:29 compute-0 pensive_shannon[270776]: }
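The short-lived pensive_shannon container exists to run a ceph-volume device scan inside the Ceph image and print the result as JSON, which cephadm then persists via the config-key set commands a few lines below. Reassembled from the journal lines above, the payload parses cleanly; a sketch, with the payload copied verbatim from this log:

    import json

    # Payload reassembled verbatim from the pensive_shannon lines above;
    # the shape resembles `ceph-volume raw list` output keyed by OSD UUID.
    payload = """
    {
        "c06a7c81-ab3c-42b8-812f-79473670be30": {
            "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
            "type": "bluestore"
        }
    }
    """

    for osd_uuid, info in json.loads(payload).items():
        print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']}, "
              f"cluster {info['ceph_fsid']}")
    # osd.0 (bluestore) on /dev/mapper/ceph_vg0-ceph_lv0, cluster 281e9bde-...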
Jan 27 09:07:29 compute-0 systemd[1]: libpod-24290f12620b3230c87c9004e514cfcd37043e1504d42eda65337984cf630c21.scope: Deactivated successfully.
Jan 27 09:07:29 compute-0 podman[270797]: 2026-01-27 09:07:29.457917052 +0000 UTC m=+0.020891108 container died 24290f12620b3230c87c9004e514cfcd37043e1504d42eda65337984cf630c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 09:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-93048b5d775fea1864c0fb13036b29e962b7332fa8fb7b46fbf1171f33c21661-merged.mount: Deactivated successfully.
Jan 27 09:07:29 compute-0 podman[270797]: 2026-01-27 09:07:29.506589605 +0000 UTC m=+0.069563631 container remove 24290f12620b3230c87c9004e514cfcd37043e1504d42eda65337984cf630c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:07:29 compute-0 systemd[1]: libpod-conmon-24290f12620b3230c87c9004e514cfcd37043e1504d42eda65337984cf630c21.scope: Deactivated successfully.
Jan 27 09:07:29 compute-0 sudo[270655]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:07:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:07:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:07:29 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:07:29 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 8241da2c-6f68-4283-a5b2-4fccc7f6caca does not exist
Jan 27 09:07:29 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c6fddd23-c08b-4b78-b950-02654225be09 does not exist
Jan 27 09:07:29 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 4255c4e8-324e-4ede-af05-bc6f1b1b4160 does not exist
Jan 27 09:07:29 compute-0 sudo[270812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:29 compute-0 sudo[270812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:29 compute-0 sudo[270812]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:29 compute-0 sudo[270837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:07:29 compute-0 sudo[270837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:29 compute-0 sudo[270837]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:29 compute-0 sudo[270862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:29 compute-0 sudo[270862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:29 compute-0 sudo[270862]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:29 compute-0 sudo[270887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:29 compute-0 sudo[270887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:29 compute-0 sudo[270887]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:29.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:30 compute-0 ceph-mon[74357]: pgmap v1332: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:07:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:07:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:07:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:07:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:31.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:31.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:32 compute-0 ceph-mon[74357]: pgmap v1333: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:07:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 27 09:07:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:33.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:33.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:34 compute-0 ceph-mon[74357]: pgmap v1334: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 27 09:07:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:35 compute-0 podman[270915]: 2026-01-27 09:07:35.236558334 +0000 UTC m=+0.050867992 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
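The podman health_status events embed the container's full launch configuration as config_data=... in Python dict-literal form (single quotes, bare True), so json.loads will not accept it; ast.literal_eval will. A sketch that extracts the payload with brace matching, since the dict itself contains commas and nested braces (it assumes no literal braces inside string values, which holds for these events):

    import ast

    def extract_config_data(event_line):
        """Pull the config_data={...} payload out of a podman health_status event."""
        start = event_line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(event_line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    # A Python dict repr, not JSON -- hence literal_eval.
                    return ast.literal_eval(event_line[start:i + 1])
        raise ValueError("unbalanced config_data payload")

    # For the event above, extract_config_data(line)["volumes"] lists the
    # bind mounts and extract_config_data(line)["healthcheck"]["test"] is
    # the /openstack/healthcheck probe being run.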
Jan 27 09:07:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:35.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:35.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:36 compute-0 ceph-mon[74357]: pgmap v1335: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:37.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:37.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:38 compute-0 ceph-mon[74357]: pgmap v1336: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:39 compute-0 ceph-mon[74357]: pgmap v1337: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:39.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:07:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:39.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:07:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:41.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:41 compute-0 ceph-mon[74357]: pgmap v1338: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:41.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:43.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:43 compute-0 ceph-mon[74357]: pgmap v1339: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:43.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:07:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:07:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:07:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:07:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:07:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:07:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:45.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:45.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:46 compute-0 ceph-mon[74357]: pgmap v1340: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:46 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:07:46.653 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:07:46 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:07:46.654 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:07:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:47.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:47.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:48 compute-0 ceph-mon[74357]: pgmap v1341: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:49.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:49 compute-0 sudo[270943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:49 compute-0 sudo[270943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:49 compute-0 sudo[270943]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:49.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:49 compute-0 sudo[270968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:07:49 compute-0 sudo[270968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:07:50 compute-0 sudo[270968]: pam_unix(sudo:session): session closed for user root
Jan 27 09:07:50 compute-0 ceph-mon[74357]: pgmap v1342: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:51 compute-0 podman[270994]: 2026-01-27 09:07:51.295644751 +0000 UTC m=+0.106830743 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:07:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:51.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:51 compute-0 ceph-mon[74357]: pgmap v1343: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:51.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:53.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:53 compute-0 ceph-mon[74357]: pgmap v1344: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:53.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:07:54.247 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:07:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:07:54.248 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:07:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:07:54.248 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:07:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:55.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:55 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:07:55.656 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:07:55 compute-0 ceph-mon[74357]: pgmap v1345: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:55.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:57.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:57 compute-0 ceph-mon[74357]: pgmap v1346: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:07:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:57.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:07:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:07:59.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:07:59 compute-0 ceph-mon[74357]: pgmap v1347: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:07:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/794909846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:07:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/794909846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:07:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:07:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:07:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:07:59.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:01.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:01 compute-0 ceph-mon[74357]: pgmap v1348: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:01.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:03.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:03 compute-0 nova_compute[247671]: 2026-01-27 09:08:03.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:08:03 compute-0 nova_compute[247671]: 2026-01-27 09:08:03.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:08:03 compute-0 ceph-mon[74357]: pgmap v1349: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:03.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:05.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:05 compute-0 ceph-mon[74357]: pgmap v1350: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:05.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:06 compute-0 podman[271028]: 2026-01-27 09:08:06.246504651 +0000 UTC m=+0.047673476 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:08:06 compute-0 nova_compute[247671]: 2026-01-27 09:08:06.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:08:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:07.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:07 compute-0 ceph-mon[74357]: pgmap v1351: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:07.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:08 compute-0 nova_compute[247671]: 2026-01-27 09:08:08.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:08:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:09.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:10.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:10 compute-0 ceph-mon[74357]: pgmap v1352: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:10 compute-0 sudo[271047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:10 compute-0 sudo[271047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:10 compute-0 sudo[271047]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:10 compute-0 sudo[271072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:10 compute-0 sudo[271072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:10 compute-0 sudo[271072]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:10 compute-0 nova_compute[247671]: 2026-01-27 09:08:10.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:08:10 compute-0 nova_compute[247671]: 2026-01-27 09:08:10.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:08:10 compute-0 nova_compute[247671]: 2026-01-27 09:08:10.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:08:10 compute-0 nova_compute[247671]: 2026-01-27 09:08:10.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:08:10 compute-0 nova_compute[247671]: 2026-01-27 09:08:10.482 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:08:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:11.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:12.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:12 compute-0 ceph-mon[74357]: pgmap v1353: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:12 compute-0 nova_compute[247671]: 2026-01-27 09:08:12.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:08:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:13.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:08:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:14.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:08:14 compute-0 ceph-mon[74357]: pgmap v1354: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:14 compute-0 nova_compute[247671]: 2026-01-27 09:08:14.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:08:14 compute-0 nova_compute[247671]: 2026-01-27 09:08:14.624 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:08:14 compute-0 nova_compute[247671]: 2026-01-27 09:08:14.624 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:08:14 compute-0 nova_compute[247671]: 2026-01-27 09:08:14.625 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:08:14 compute-0 nova_compute[247671]: 2026-01-27 09:08:14.625 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:08:14 compute-0 nova_compute[247671]: 2026-01-27 09:08:14.625 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:08:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:08:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693901752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:08:15 compute-0 nova_compute[247671]: 2026-01-27 09:08:15.041 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
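As the CMD lines show, nova-compute sizes its Ceph-backed storage by shelling out to the ceph CLI rather than using librados directly. A sketch reproducing the same probe; it needs a reachable cluster and a keyring for client.openstack, and the JSON key names reflect recent Ceph releases, so verify them against the running version:

    import json
    import subprocess

    # Same command and flags nova runs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)

    # Cluster-wide totals live under "stats", per-pool usage under "pools"
    # (key names per recent Ceph releases -- verify against your version).
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])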
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:08:15
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['volumes', '.rgw.root', 'images', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log']
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:08:15 compute-0 nova_compute[247671]: 2026-01-27 09:08:15.202 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:08:15 compute-0 nova_compute[247671]: 2026-01-27 09:08:15.203 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5161MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
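The resource-view line above logs the host's PCI inventory as an embedded JSON array. A sketch that extracts the array and tallies it by vendor:product pair; for this line it counts two 1af4:1000 (virtio-net) functions and one each of the remaining PIIX and virtio functions of the KVM guest:

    import json
    import re
    from collections import Counter

    def pci_summary(resource_view_line):
        """Count PCI functions by vendor:product from a resource-view log line."""
        m = re.search(r"pci_devices=(\[.*?\]) _report_hypervisor",
                      resource_view_line)
        if m is None:
            return Counter()
        devices = json.loads(m.group(1))
        return Counter(f"{d['vendor_id']}:{d['product_id']}" for d in devices)

    # For the line above: Counter({'1af4:1000': 2, '8086:7000': 1, ...})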
Jan 27 09:08:15 compute-0 nova_compute[247671]: 2026-01-27 09:08:15.204 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:08:15 compute-0 nova_compute[247671]: 2026-01-27 09:08:15.204 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:08:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1481439329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:08:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3693901752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:08:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1582618549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:08:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/770511293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:08:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:08:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:08:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:15.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:08:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:08:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:16.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:08:16 compute-0 ceph-mon[74357]: pgmap v1355: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:17 compute-0 nova_compute[247671]: 2026-01-27 09:08:17.023 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:08:17 compute-0 nova_compute[247671]: 2026-01-27 09:08:17.023 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:08:17 compute-0 nova_compute[247671]: 2026-01-27 09:08:17.024 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:08:17 compute-0 nova_compute[247671]: 2026-01-27 09:08:17.070 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:08:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:17.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:17 compute-0 ceph-mon[74357]: pgmap v1356: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:08:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295132981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:08:17 compute-0 nova_compute[247671]: 2026-01-27 09:08:17.555 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:08:17 compute-0 nova_compute[247671]: 2026-01-27 09:08:17.562 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:08:17 compute-0 nova_compute[247671]: 2026-01-27 09:08:17.770 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:08:17 compute-0 nova_compute[247671]: 2026-01-27 09:08:17.774 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:08:17 compute-0 nova_compute[247671]: 2026-01-27 09:08:17.774 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:08:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:18.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/950578345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:08:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3295132981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:08:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:08:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:19.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:08:19 compute-0 ceph-mon[74357]: pgmap v1357: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:08:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:20.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:08:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 85 B/s wr, 5 op/s
Jan 27 09:08:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:21.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:21 compute-0 ceph-mon[74357]: pgmap v1358: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 85 B/s wr, 5 op/s
Jan 27 09:08:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:08:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:22.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:08:22 compute-0 podman[271147]: 2026-01-27 09:08:22.286826516 +0000 UTC m=+0.099215066 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 09:08:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 383 KiB/s rd, 85 B/s wr, 6 op/s
Jan 27 09:08:22 compute-0 nova_compute[247671]: 2026-01-27 09:08:22.776 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:08:22 compute-0 nova_compute[247671]: 2026-01-27 09:08:22.776 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:08:23 compute-0 ceph-osd[84951]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 27 09:08:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:23.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:23 compute-0 ceph-mon[74357]: pgmap v1359: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 383 KiB/s rd, 85 B/s wr, 6 op/s
Jan 27 09:08:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:24.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:08:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 383 KiB/s rd, 85 B/s wr, 6 op/s
Jan 27 09:08:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:08:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:25.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:08:25 compute-0 ceph-mon[74357]: pgmap v1360: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 383 KiB/s rd, 85 B/s wr, 6 op/s
Jan 27 09:08:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:26.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 11 op/s
Jan 27 09:08:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:27.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:27 compute-0 ceph-mon[74357]: pgmap v1361: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 11 op/s
Jan 27 09:08:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:28.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 11 op/s
Jan 27 09:08:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:29.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:29 compute-0 ceph-mon[74357]: pgmap v1362: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 11 op/s
Jan 27 09:08:29 compute-0 sudo[271178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:29 compute-0 sudo[271178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:29 compute-0 sudo[271178]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:08:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:30.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:08:30 compute-0 sudo[271203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:08:30 compute-0 sudo[271203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:30 compute-0 sudo[271203]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:30 compute-0 sudo[271228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:30 compute-0 sudo[271228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:30 compute-0 sudo[271228]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:30 compute-0 sudo[271253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:08:30 compute-0 sudo[271253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:30 compute-0 sudo[271278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:30 compute-0 sudo[271278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:30 compute-0 sudo[271278]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:30 compute-0 sudo[271303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:30 compute-0 sudo[271303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:30 compute-0 sudo[271303]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:30 compute-0 sudo[271253]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 56 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 384 KiB/s wr, 32 op/s
Jan 27 09:08:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:08:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:08:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:08:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:08:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:08:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:08:30 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 6640aca5-5be7-46da-8454-585964fd1e10 does not exist
Jan 27 09:08:30 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 41f1e083-bb2d-43c8-a46f-80474f2ec7a0 does not exist
Jan 27 09:08:30 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 066ba229-c538-48b2-97c7-ef6b03280d91 does not exist
Jan 27 09:08:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:08:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:08:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:08:30 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:08:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:08:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:08:30 compute-0 sudo[271361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:30 compute-0 sudo[271361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:30 compute-0 sudo[271361]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:30 compute-0 sudo[271386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:08:30 compute-0 sudo[271386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:30 compute-0 sudo[271386]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:30 compute-0 sudo[271411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:30 compute-0 sudo[271411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:30 compute-0 sudo[271411]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:30 compute-0 sudo[271436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:08:30 compute-0 sudo[271436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:08:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:08:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:08:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:08:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:08:30 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:08:31 compute-0 podman[271502]: 2026-01-27 09:08:31.270075868 +0000 UTC m=+0.036929271 container create 77c2f5bdb0bec1ce39fd404a8fc8d2dfd6b6999016f22a50558ed54a3da949b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_johnson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:08:31 compute-0 systemd[1]: Started libpod-conmon-77c2f5bdb0bec1ce39fd404a8fc8d2dfd6b6999016f22a50558ed54a3da949b6.scope.
Jan 27 09:08:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:08:31 compute-0 podman[271502]: 2026-01-27 09:08:31.33666196 +0000 UTC m=+0.103515363 container init 77c2f5bdb0bec1ce39fd404a8fc8d2dfd6b6999016f22a50558ed54a3da949b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:08:31 compute-0 podman[271502]: 2026-01-27 09:08:31.344078843 +0000 UTC m=+0.110932246 container start 77c2f5bdb0bec1ce39fd404a8fc8d2dfd6b6999016f22a50558ed54a3da949b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:08:31 compute-0 podman[271502]: 2026-01-27 09:08:31.346474308 +0000 UTC m=+0.113327711 container attach 77c2f5bdb0bec1ce39fd404a8fc8d2dfd6b6999016f22a50558ed54a3da949b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_johnson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 09:08:31 compute-0 podman[271502]: 2026-01-27 09:08:31.254156753 +0000 UTC m=+0.021010176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:08:31 compute-0 stupefied_johnson[271519]: 167 167
Jan 27 09:08:31 compute-0 systemd[1]: libpod-77c2f5bdb0bec1ce39fd404a8fc8d2dfd6b6999016f22a50558ed54a3da949b6.scope: Deactivated successfully.
Jan 27 09:08:31 compute-0 podman[271502]: 2026-01-27 09:08:31.350697944 +0000 UTC m=+0.117551347 container died 77c2f5bdb0bec1ce39fd404a8fc8d2dfd6b6999016f22a50558ed54a3da949b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:08:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-11a6845408619819d2c7d080049f1e2d1b7a0ec079e72539483cde1de0ae4d48-merged.mount: Deactivated successfully.
Jan 27 09:08:31 compute-0 podman[271502]: 2026-01-27 09:08:31.38161473 +0000 UTC m=+0.148468133 container remove 77c2f5bdb0bec1ce39fd404a8fc8d2dfd6b6999016f22a50558ed54a3da949b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 27 09:08:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:31.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:31 compute-0 systemd[1]: libpod-conmon-77c2f5bdb0bec1ce39fd404a8fc8d2dfd6b6999016f22a50558ed54a3da949b6.scope: Deactivated successfully.
Jan 27 09:08:31 compute-0 podman[271545]: 2026-01-27 09:08:31.522230586 +0000 UTC m=+0.034852334 container create 52d661b48b6427ab0c9274c6e6f2b4cf95945fe771a5b6bf25e3de6177a29f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_keldysh, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 09:08:31 compute-0 systemd[1]: Started libpod-conmon-52d661b48b6427ab0c9274c6e6f2b4cf95945fe771a5b6bf25e3de6177a29f3c.scope.
Jan 27 09:08:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbc9246f2449086da1745642588e7500270daeb9cc9b1510f011fca06db6a63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbc9246f2449086da1745642588e7500270daeb9cc9b1510f011fca06db6a63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbc9246f2449086da1745642588e7500270daeb9cc9b1510f011fca06db6a63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbc9246f2449086da1745642588e7500270daeb9cc9b1510f011fca06db6a63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbc9246f2449086da1745642588e7500270daeb9cc9b1510f011fca06db6a63/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:31 compute-0 podman[271545]: 2026-01-27 09:08:31.597083225 +0000 UTC m=+0.109704993 container init 52d661b48b6427ab0c9274c6e6f2b4cf95945fe771a5b6bf25e3de6177a29f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 27 09:08:31 compute-0 podman[271545]: 2026-01-27 09:08:31.507931145 +0000 UTC m=+0.020552913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:08:31 compute-0 podman[271545]: 2026-01-27 09:08:31.605290929 +0000 UTC m=+0.117912677 container start 52d661b48b6427ab0c9274c6e6f2b4cf95945fe771a5b6bf25e3de6177a29f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:08:31 compute-0 podman[271545]: 2026-01-27 09:08:31.608195289 +0000 UTC m=+0.120817097 container attach 52d661b48b6427ab0c9274c6e6f2b4cf95945fe771a5b6bf25e3de6177a29f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_keldysh, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:08:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:32.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:32 compute-0 ceph-mon[74357]: pgmap v1363: 305 pgs: 305 active+clean; 56 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 384 KiB/s wr, 32 op/s
Jan 27 09:08:32 compute-0 xenodochial_keldysh[271561]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:08:32 compute-0 xenodochial_keldysh[271561]: --> relative data size: 1.0
Jan 27 09:08:32 compute-0 xenodochial_keldysh[271561]: --> All data devices are unavailable
Jan 27 09:08:32 compute-0 systemd[1]: libpod-52d661b48b6427ab0c9274c6e6f2b4cf95945fe771a5b6bf25e3de6177a29f3c.scope: Deactivated successfully.
Jan 27 09:08:32 compute-0 podman[271545]: 2026-01-27 09:08:32.444931771 +0000 UTC m=+0.957553519 container died 52d661b48b6427ab0c9274c6e6f2b4cf95945fe771a5b6bf25e3de6177a29f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_keldysh, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 27 09:08:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cbc9246f2449086da1745642588e7500270daeb9cc9b1510f011fca06db6a63-merged.mount: Deactivated successfully.
Jan 27 09:08:32 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:08:32.491 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:08:32 compute-0 podman[271545]: 2026-01-27 09:08:32.493491029 +0000 UTC m=+1.006112777 container remove 52d661b48b6427ab0c9274c6e6f2b4cf95945fe771a5b6bf25e3de6177a29f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_keldysh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:08:32 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:08:32.493 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:08:32 compute-0 systemd[1]: libpod-conmon-52d661b48b6427ab0c9274c6e6f2b4cf95945fe771a5b6bf25e3de6177a29f3c.scope: Deactivated successfully.
Jan 27 09:08:32 compute-0 sudo[271436]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:32 compute-0 sudo[271588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:32 compute-0 sudo[271588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:32 compute-0 sudo[271588]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:32 compute-0 sudo[271613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:08:32 compute-0 sudo[271613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:32 compute-0 sudo[271613]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:32 compute-0 sudo[271638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:32 compute-0 sudo[271638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:32 compute-0 sudo[271638]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 27 09:08:32 compute-0 sudo[271663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:08:32 compute-0 sudo[271663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:33 compute-0 podman[271728]: 2026-01-27 09:08:33.006431223 +0000 UTC m=+0.036082298 container create fc30fe480d890077c9f10ccea4c47a3dc3d2db9779df2736a4850f83b8e250b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 27 09:08:33 compute-0 systemd[1]: Started libpod-conmon-fc30fe480d890077c9f10ccea4c47a3dc3d2db9779df2736a4850f83b8e250b9.scope.
Jan 27 09:08:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:08:33 compute-0 podman[271728]: 2026-01-27 09:08:33.070543537 +0000 UTC m=+0.100194612 container init fc30fe480d890077c9f10ccea4c47a3dc3d2db9779df2736a4850f83b8e250b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackwell, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:08:33 compute-0 podman[271728]: 2026-01-27 09:08:33.076412267 +0000 UTC m=+0.106063342 container start fc30fe480d890077c9f10ccea4c47a3dc3d2db9779df2736a4850f83b8e250b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackwell, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:08:33 compute-0 podman[271728]: 2026-01-27 09:08:33.080618193 +0000 UTC m=+0.110269278 container attach fc30fe480d890077c9f10ccea4c47a3dc3d2db9779df2736a4850f83b8e250b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:08:33 compute-0 determined_blackwell[271744]: 167 167
Jan 27 09:08:33 compute-0 systemd[1]: libpod-fc30fe480d890077c9f10ccea4c47a3dc3d2db9779df2736a4850f83b8e250b9.scope: Deactivated successfully.
Jan 27 09:08:33 compute-0 podman[271728]: 2026-01-27 09:08:33.081672781 +0000 UTC m=+0.111323856 container died fc30fe480d890077c9f10ccea4c47a3dc3d2db9779df2736a4850f83b8e250b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 09:08:33 compute-0 podman[271728]: 2026-01-27 09:08:32.991747811 +0000 UTC m=+0.021398886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:08:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-459a1160e09331611a1564842324d9509e52bf43224242d6530aa549eaab09c4-merged.mount: Deactivated successfully.
Jan 27 09:08:33 compute-0 podman[271728]: 2026-01-27 09:08:33.107628711 +0000 UTC m=+0.137279786 container remove fc30fe480d890077c9f10ccea4c47a3dc3d2db9779df2736a4850f83b8e250b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:08:33 compute-0 systemd[1]: libpod-conmon-fc30fe480d890077c9f10ccea4c47a3dc3d2db9779df2736a4850f83b8e250b9.scope: Deactivated successfully.
Jan 27 09:08:33 compute-0 podman[271767]: 2026-01-27 09:08:33.252022632 +0000 UTC m=+0.037817646 container create 8cda7143b55299e02227f25d41fe239880604d54656aa4af1319b5bbc950f382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:08:33 compute-0 systemd[1]: Started libpod-conmon-8cda7143b55299e02227f25d41fe239880604d54656aa4af1319b5bbc950f382.scope.
Jan 27 09:08:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08dc77db178c11e84222d43fb1b27a2bf2b36ffdbd0728d1af411facb5665688/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08dc77db178c11e84222d43fb1b27a2bf2b36ffdbd0728d1af411facb5665688/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08dc77db178c11e84222d43fb1b27a2bf2b36ffdbd0728d1af411facb5665688/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08dc77db178c11e84222d43fb1b27a2bf2b36ffdbd0728d1af411facb5665688/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:33 compute-0 podman[271767]: 2026-01-27 09:08:33.234243065 +0000 UTC m=+0.020038049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:08:33 compute-0 podman[271767]: 2026-01-27 09:08:33.332964627 +0000 UTC m=+0.118759631 container init 8cda7143b55299e02227f25d41fe239880604d54656aa4af1319b5bbc950f382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:08:33 compute-0 podman[271767]: 2026-01-27 09:08:33.344189354 +0000 UTC m=+0.129984338 container start 8cda7143b55299e02227f25d41fe239880604d54656aa4af1319b5bbc950f382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_saha, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:08:33 compute-0 podman[271767]: 2026-01-27 09:08:33.348011568 +0000 UTC m=+0.133806552 container attach 8cda7143b55299e02227f25d41fe239880604d54656aa4af1319b5bbc950f382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:08:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:33.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
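(The HEAD / HTTP/1.0 probes above from 192.168.122.100 and 192.168.122.102 recur roughly every two seconds and return 200 with zero bytes, which is the signature of load-balancer health checks against the radosgw beast frontend. A minimal sketch for pulling the client IP, status, and latency out of these access-log lines; the regex is keyed to the exact format shown in this log and is an assumption, not the canonical beast grammar:)

    import re

    # Matches the beast access-log payload shown above, e.g.
    #   beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous
    #   [27/Jan/2026:09:08:33.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
    BEAST_RE = re.compile(
        r'beast: 0x[0-9a-f]+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<nbytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line):
        """Return a dict of fields from one beast access-log line, or None."""
        m = BEAST_RE.search(line)
        return m.groupdict() if m else None

    sample = ('beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous '
              '[27/Jan/2026:09:08:33.386 +0000] "HEAD / HTTP/1.0" 200 0 '
              '- - - latency=0.000000000s')
    print(parse_beast(sample))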
Jan 27 09:08:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:34.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:34 compute-0 cool_saha[271784]: {
Jan 27 09:08:34 compute-0 cool_saha[271784]:     "0": [
Jan 27 09:08:34 compute-0 cool_saha[271784]:         {
Jan 27 09:08:34 compute-0 cool_saha[271784]:             "devices": [
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "/dev/loop3"
Jan 27 09:08:34 compute-0 cool_saha[271784]:             ],
Jan 27 09:08:34 compute-0 cool_saha[271784]:             "lv_name": "ceph_lv0",
Jan 27 09:08:34 compute-0 cool_saha[271784]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:08:34 compute-0 cool_saha[271784]:             "lv_size": "7511998464",
Jan 27 09:08:34 compute-0 cool_saha[271784]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:08:34 compute-0 cool_saha[271784]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:08:34 compute-0 cool_saha[271784]:             "name": "ceph_lv0",
Jan 27 09:08:34 compute-0 cool_saha[271784]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:08:34 compute-0 cool_saha[271784]:             "tags": {
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "ceph.cluster_name": "ceph",
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "ceph.crush_device_class": "",
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "ceph.encrypted": "0",
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "ceph.osd_id": "0",
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "ceph.type": "block",
Jan 27 09:08:34 compute-0 cool_saha[271784]:                 "ceph.vdo": "0"
Jan 27 09:08:34 compute-0 cool_saha[271784]:             },
Jan 27 09:08:34 compute-0 cool_saha[271784]:             "type": "block",
Jan 27 09:08:34 compute-0 cool_saha[271784]:             "vg_name": "ceph_vg0"
Jan 27 09:08:34 compute-0 cool_saha[271784]:         }
Jan 27 09:08:34 compute-0 cool_saha[271784]:     ]
Jan 27 09:08:34 compute-0 cool_saha[271784]: }
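(The JSON block printed by the cool_saha container has the shape of ceph-volume lvm list --format json output: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags repeated as a parsed "tags" object. A minimal sketch of consuming it, assuming the output above has been captured to lvm_list.json — a hypothetical filename:)

    import json

    # ceph-volume lvm list --format json maps OSD id -> list of LV records.
    with open("lvm_list.json") as f:  # hypothetical capture of the block above
        osds = json.load(f)

    for osd_id, lvs in osds.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            # lv_path and the ceph.* tags are the fields cephadm keys on.
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(fsid={tags.get('ceph.osd_fsid')}, "
                  f"encrypted={tags.get('ceph.encrypted')})")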
Jan 27 09:08:34 compute-0 ceph-mon[74357]: pgmap v1364: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 27 09:08:34 compute-0 systemd[1]: libpod-8cda7143b55299e02227f25d41fe239880604d54656aa4af1319b5bbc950f382.scope: Deactivated successfully.
Jan 27 09:08:34 compute-0 podman[271767]: 2026-01-27 09:08:34.097917455 +0000 UTC m=+0.883712449 container died 8cda7143b55299e02227f25d41fe239880604d54656aa4af1319b5bbc950f382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:08:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-08dc77db178c11e84222d43fb1b27a2bf2b36ffdbd0728d1af411facb5665688-merged.mount: Deactivated successfully.
Jan 27 09:08:34 compute-0 podman[271767]: 2026-01-27 09:08:34.1496656 +0000 UTC m=+0.935460594 container remove 8cda7143b55299e02227f25d41fe239880604d54656aa4af1319b5bbc950f382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:08:34 compute-0 systemd[1]: libpod-conmon-8cda7143b55299e02227f25d41fe239880604d54656aa4af1319b5bbc950f382.scope: Deactivated successfully.
Jan 27 09:08:34 compute-0 sudo[271663]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:34 compute-0 sudo[271804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:34 compute-0 sudo[271804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:34 compute-0 sudo[271804]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:34 compute-0 sudo[271829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:08:34 compute-0 sudo[271829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:34 compute-0 sudo[271829]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:34 compute-0 sudo[271854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:34 compute-0 sudo[271854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:34 compute-0 sudo[271854]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:34 compute-0 sudo[271879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
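(This sudo line is cephadm's shim invoking ceph-volume raw list inside the ceph container. The same inventory call can be reproduced by hand; a sketch under the assumption that a plain cephadm is on root's PATH — the orchestrator itself runs the hash-suffixed copy under /var/lib/ceph shown above:)

    import json
    import subprocess

    # Reproduce the orchestrator's inventory call from the log line above.
    # Requires root; the image digest and fsid are the ones from this host.
    cmd = [
        "cephadm",
        "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895",
        "ceph-volume",
        "--fsid", "281e9bde-2795-59f4-98ac-90cf5b49a2de",
        "--", "raw", "list", "--format", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    print(json.dumps(json.loads(out), indent=4))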
Jan 27 09:08:34 compute-0 sudo[271879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 27 09:08:34 compute-0 podman[271945]: 2026-01-27 09:08:34.713315591 +0000 UTC m=+0.036110739 container create 6e568a6d96e13a1cb74a616534e0494f67892946999018dabb1c307d2e741fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:08:34 compute-0 systemd[1]: Started libpod-conmon-6e568a6d96e13a1cb74a616534e0494f67892946999018dabb1c307d2e741fc1.scope.
Jan 27 09:08:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:08:34 compute-0 podman[271945]: 2026-01-27 09:08:34.790034831 +0000 UTC m=+0.112830009 container init 6e568a6d96e13a1cb74a616534e0494f67892946999018dabb1c307d2e741fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ramanujan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:08:34 compute-0 podman[271945]: 2026-01-27 09:08:34.698443935 +0000 UTC m=+0.021239093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:08:34 compute-0 podman[271945]: 2026-01-27 09:08:34.797662249 +0000 UTC m=+0.120457397 container start 6e568a6d96e13a1cb74a616534e0494f67892946999018dabb1c307d2e741fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 09:08:34 compute-0 podman[271945]: 2026-01-27 09:08:34.801867984 +0000 UTC m=+0.124663132 container attach 6e568a6d96e13a1cb74a616534e0494f67892946999018dabb1c307d2e741fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ramanujan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:08:34 compute-0 infallible_ramanujan[271961]: 167 167
Jan 27 09:08:34 compute-0 systemd[1]: libpod-6e568a6d96e13a1cb74a616534e0494f67892946999018dabb1c307d2e741fc1.scope: Deactivated successfully.
Jan 27 09:08:34 compute-0 podman[271945]: 2026-01-27 09:08:34.804444695 +0000 UTC m=+0.127239843 container died 6e568a6d96e13a1cb74a616534e0494f67892946999018dabb1c307d2e741fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ramanujan, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:08:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b104df8fc4c8c40de483ef4851c2704ebcbc47f9ea2d7c350252d3f9826d3114-merged.mount: Deactivated successfully.
Jan 27 09:08:34 compute-0 podman[271945]: 2026-01-27 09:08:34.84521758 +0000 UTC m=+0.168012728 container remove 6e568a6d96e13a1cb74a616534e0494f67892946999018dabb1c307d2e741fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 09:08:34 compute-0 systemd[1]: libpod-conmon-6e568a6d96e13a1cb74a616534e0494f67892946999018dabb1c307d2e741fc1.scope: Deactivated successfully.
Jan 27 09:08:34 compute-0 podman[271983]: 2026-01-27 09:08:34.987979506 +0000 UTC m=+0.036995673 container create ef34be3e23b8781e10c3d1244e28aec77a86c46dcebc0ef3d82366df5273f7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 09:08:35 compute-0 systemd[1]: Started libpod-conmon-ef34be3e23b8781e10c3d1244e28aec77a86c46dcebc0ef3d82366df5273f7b8.scope.
Jan 27 09:08:35 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95c3a9d5c78f1b31f1b27eaee26032179db4371cc57faa4800a7f71538fa5c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95c3a9d5c78f1b31f1b27eaee26032179db4371cc57faa4800a7f71538fa5c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95c3a9d5c78f1b31f1b27eaee26032179db4371cc57faa4800a7f71538fa5c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95c3a9d5c78f1b31f1b27eaee26032179db4371cc57faa4800a7f71538fa5c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:35 compute-0 podman[271983]: 2026-01-27 09:08:35.067003508 +0000 UTC m=+0.116019695 container init ef34be3e23b8781e10c3d1244e28aec77a86c46dcebc0ef3d82366df5273f7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:08:35 compute-0 podman[271983]: 2026-01-27 09:08:34.973263923 +0000 UTC m=+0.022280100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:08:35 compute-0 podman[271983]: 2026-01-27 09:08:35.073225028 +0000 UTC m=+0.122241195 container start ef34be3e23b8781e10c3d1244e28aec77a86c46dcebc0ef3d82366df5273f7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 27 09:08:35 compute-0 podman[271983]: 2026-01-27 09:08:35.075820189 +0000 UTC m=+0.124836356 container attach ef34be3e23b8781e10c3d1244e28aec77a86c46dcebc0ef3d82366df5273f7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:08:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:35.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:35 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:08:35.494 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:08:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:35 compute-0 festive_galileo[272000]: {
Jan 27 09:08:35 compute-0 festive_galileo[272000]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:08:35 compute-0 festive_galileo[272000]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:08:35 compute-0 festive_galileo[272000]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:08:35 compute-0 festive_galileo[272000]:         "osd_id": 0,
Jan 27 09:08:35 compute-0 festive_galileo[272000]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:08:35 compute-0 festive_galileo[272000]:         "type": "bluestore"
Jan 27 09:08:35 compute-0 festive_galileo[272000]:     }
Jan 27 09:08:35 compute-0 festive_galileo[272000]: }
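(The festive_galileo output is that raw list call completing: a map of OSD uuid to its BlueStore device. Parsing it is a one-liner; a self-contained sketch using the exact object printed above:)

    import json

    # The object printed by the festive_galileo container above.
    RAW_LIST_JSON = """
    {
        "c06a7c81-ab3c-42b8-812f-79473670be30": {
            "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
            "type": "bluestore"
        }
    }
    """

    for osd_uuid, osd in json.loads(RAW_LIST_JSON).items():
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}, "
              f"cluster {osd['ceph_fsid']}")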
Jan 27 09:08:35 compute-0 systemd[1]: libpod-ef34be3e23b8781e10c3d1244e28aec77a86c46dcebc0ef3d82366df5273f7b8.scope: Deactivated successfully.
Jan 27 09:08:35 compute-0 podman[271983]: 2026-01-27 09:08:35.947328133 +0000 UTC m=+0.996344300 container died ef34be3e23b8781e10c3d1244e28aec77a86c46dcebc0ef3d82366df5273f7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 27 09:08:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d95c3a9d5c78f1b31f1b27eaee26032179db4371cc57faa4800a7f71538fa5c4-merged.mount: Deactivated successfully.
Jan 27 09:08:35 compute-0 podman[271983]: 2026-01-27 09:08:35.996727814 +0000 UTC m=+1.045743981 container remove ef34be3e23b8781e10c3d1244e28aec77a86c46dcebc0ef3d82366df5273f7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 09:08:36 compute-0 systemd[1]: libpod-conmon-ef34be3e23b8781e10c3d1244e28aec77a86c46dcebc0ef3d82366df5273f7b8.scope: Deactivated successfully.
Jan 27 09:08:36 compute-0 sudo[271879]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:08:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:36.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:08:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:08:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:08:36 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 2ed0c26c-9b01-4ef3-ae27-25517c9d33dd does not exist
Jan 27 09:08:36 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d77d296c-cabf-4905-8b1a-dbbd3c159ade does not exist
Jan 27 09:08:36 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 06b0f117-ea8a-4794-a429-1e627ad5c253 does not exist
Jan 27 09:08:36 compute-0 ceph-mon[74357]: pgmap v1365: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 27 09:08:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:08:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:08:36 compute-0 sudo[272033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:36 compute-0 sudo[272033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:36 compute-0 sudo[272033]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:36 compute-0 sudo[272058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:08:36 compute-0 sudo[272058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:36 compute-0 sudo[272058]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 27 09:08:37 compute-0 podman[272084]: 2026-01-27 09:08:37.254773164 +0000 UTC m=+0.064947119 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
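(The podman "container health_status" events here and below come from podman's periodic healthcheck runs; per the config_data, the configured test is the /openstack/healthcheck script mounted into the container. A sketch of querying the same status on demand via podman inspect; note that the JSON key has been "Health" in recent podman releases and "Healthcheck" in older ones, so both are checked as a hedge:)

    import json
    import subprocess

    # Query the current health of the ovn_metadata_agent container.
    out = subprocess.run(
        ["podman", "inspect", "ovn_metadata_agent"],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    # Key name varies by podman version: "Health" (newer) vs "Healthcheck" (older).
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))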
Jan 27 09:08:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:37.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:38.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:38 compute-0 ceph-mon[74357]: pgmap v1366: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 27 09:08:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 09:08:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5d8f6f0 =====
Jan 27 09:08:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:40.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5d8f6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:40 compute-0 radosgw[92542]: beast: 0x7f84d5d8f6f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:40.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:40 compute-0 ceph-mon[74357]: pgmap v1367: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 09:08:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 09:08:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:41 compute-0 ceph-mon[74357]: pgmap v1368: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 09:08:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:08:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:42.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:08:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:42.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 1.4 MiB/s wr, 12 op/s
Jan 27 09:08:43 compute-0 ceph-mon[74357]: pgmap v1369: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 1.4 MiB/s wr, 12 op/s
Jan 27 09:08:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 09:08:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:44.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 09:08:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:44.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:08:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:08:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:08:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:08:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:08:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:08:45 compute-0 ceph-mon[74357]: pgmap v1370: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 09:08:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:46.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 09:08:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:08:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:46.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:08:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:47 compute-0 ceph-mon[74357]: pgmap v1371: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:48.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:08:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:48.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:08:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:49 compute-0 ceph-mon[74357]: pgmap v1372: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:50.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:50.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:50 compute-0 sudo[272108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:50 compute-0 sudo[272108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:50 compute-0 sudo[272108]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:50 compute-0 sudo[272133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:08:50 compute-0 sudo[272133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:08:50 compute-0 sudo[272133]: pam_unix(sudo:session): session closed for user root
Jan 27 09:08:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:51 compute-0 ceph-mon[74357]: pgmap v1373: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:08:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:52.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:08:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:08:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:52.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:08:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:53 compute-0 podman[272160]: 2026-01-27 09:08:53.303960194 +0000 UTC m=+0.119720616 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 09:08:54 compute-0 ceph-mon[74357]: pgmap v1374: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:08:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:54.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:08:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:54.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:08:54.248 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:08:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:08:54.249 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:08:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:08:54.249 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:08:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:55 compute-0 ceph-mon[74357]: pgmap v1375: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:08:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:56.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:08:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:56.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:57 compute-0 ceph-mon[74357]: pgmap v1376: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:08:58.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:08:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:08:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:08:58.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:08:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:59 compute-0 ceph-mon[74357]: pgmap v1377: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:08:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2823542276' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:08:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2823542276' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
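(These two dispatches are what an OpenStack capacity poll looks like from the mon's side: a cluster df plus a quota lookup on the volumes pool, both issued by client.openstack. The same pair can be sent through the python-rados binding; a sketch assuming /etc/ceph/ceph.conf and a readable client.openstack keyring on the calling host:)

    import json
    import rados

    # Mirrors the two mon commands dispatched by client.openstack above.
    # conffile and client name are assumptions about the local setup.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, json.loads(out or b"{}"))
    finally:
        cluster.shutdown()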
Jan 27 09:09:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:00.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:00.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:09:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:01 compute-0 ceph-mon[74357]: pgmap v1378: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:09:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:02.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:02.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 27 09:09:03 compute-0 ceph-osd[84951]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 27 09:09:03 compute-0 ceph-mon[74357]: pgmap v1379: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 27 09:09:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:04.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:04.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:04 compute-0 nova_compute[247671]: 2026-01-27 09:09:04.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:04 compute-0 nova_compute[247671]: 2026-01-27 09:09:04.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 09:09:04 compute-0 nova_compute[247671]: 2026-01-27 09:09:04.444 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 09:09:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 27 09:09:05 compute-0 nova_compute[247671]: 2026-01-27 09:09:05.445 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:05 compute-0 nova_compute[247671]: 2026-01-27 09:09:05.445 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:09:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:06 compute-0 ceph-mon[74357]: pgmap v1380: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 27 09:09:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:06.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:06.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 11 op/s
Jan 27 09:09:07 compute-0 nova_compute[247671]: 2026-01-27 09:09:07.419 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:07 compute-0 nova_compute[247671]: 2026-01-27 09:09:07.747 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:08 compute-0 ceph-mon[74357]: pgmap v1381: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 11 op/s
Jan 27 09:09:08 compute-0 podman[272195]: 2026-01-27 09:09:08.233654534 +0000 UTC m=+0.054123211 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 27 09:09:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:08.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:08.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
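Each beast line is RGW's access-log record: client IP, user (here the anonymous load-balancer health probe), timestamp, request line, HTTP status, response bytes, and latency. A rough parser for just those fields, assuming the layout stays exactly as printed here (this regex is an illustration, not an official RGW log grammar):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous '
            '[27/Jan/2026:09:09:06.256 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000027s')
    m = BEAST.search(line)
    print(m['ip'], m['status'], m['latency'])  # 192.168.122.102 200 0.001000027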
Jan 27 09:09:08 compute-0 nova_compute[247671]: 2026-01-27 09:09:08.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 11 op/s
Jan 27 09:09:10 compute-0 ceph-mon[74357]: pgmap v1382: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 11 op/s
Jan 27 09:09:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:10.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:10.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:10 compute-0 nova_compute[247671]: 2026-01-27 09:09:10.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:10 compute-0 nova_compute[247671]: 2026-01-27 09:09:10.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 09:09:10 compute-0 sudo[272218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:10 compute-0 sudo[272218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:10 compute-0 sudo[272218]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:10 compute-0 sudo[272244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:10 compute-0 sudo[272244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:10 compute-0 sudo[272244]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 122 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 39 op/s
Jan 27 09:09:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:11 compute-0 nova_compute[247671]: 2026-01-27 09:09:11.470 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:11 compute-0 ceph-mon[74357]: pgmap v1383: 305 pgs: 305 active+clean; 122 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 39 op/s
Jan 27 09:09:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:12.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:12.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:12 compute-0 nova_compute[247671]: 2026-01-27 09:09:12.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:12 compute-0 nova_compute[247671]: 2026-01-27 09:09:12.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:09:12 compute-0 nova_compute[247671]: 2026-01-27 09:09:12.424 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:09:12 compute-0 nova_compute[247671]: 2026-01-27 09:09:12.477 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:09:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:09:13 compute-0 nova_compute[247671]: 2026-01-27 09:09:13.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:13 compute-0 ceph-mon[74357]: pgmap v1384: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:09:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:14.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:14.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:14 compute-0 nova_compute[247671]: 2026-01-27 09:09:14.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:14 compute-0 nova_compute[247671]: 2026-01-27 09:09:14.616 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:09:14 compute-0 nova_compute[247671]: 2026-01-27 09:09:14.616 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:09:14 compute-0 nova_compute[247671]: 2026-01-27 09:09:14.617 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:09:14 compute-0 nova_compute[247671]: 2026-01-27 09:09:14.617 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:09:14 compute-0 nova_compute[247671]: 2026-01-27 09:09:14.617 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:09:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:09:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:09:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026614043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:09:15 compute-0 nova_compute[247671]: 2026-01-27 09:09:15.046 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
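The 0.429 s round trip above is the resource tracker sizing its RBD-backed disk pool: the exact command is in the log, and the mon audit lines show it arriving as {"prefix": "df", "format": "json"}. The same query can be reproduced with the client id and conf path from the log; the stats keys below are standard `ceph df --format=json` output:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    # These two fields are what the pgmap lines summarize as
    # "21 GiB / 21 GiB avail".
    print(stats["total_bytes"], stats["total_avail_bytes"])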
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:09:15
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control']
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:09:15 compute-0 nova_compute[247671]: 2026-01-27 09:09:15.207 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:09:15 compute-0 nova_compute[247671]: 2026-01-27 09:09:15.208 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5168MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:09:15 compute-0 nova_compute[247671]: 2026-01-27 09:09:15.208 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:09:15 compute-0 nova_compute[247671]: 2026-01-27 09:09:15.209 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:09:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:09:15 compute-0 nova_compute[247671]: 2026-01-27 09:09:15.814 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:09:15 compute-0 nova_compute[247671]: 2026-01-27 09:09:15.814 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:09:15 compute-0 nova_compute[247671]: 2026-01-27 09:09:15.815 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:09:15 compute-0 ceph-mon[74357]: pgmap v1385: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:09:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2026614043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:09:15 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/960002686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:09:15 compute-0 nova_compute[247671]: 2026-01-27 09:09:15.892 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:09:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:16.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:16.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:09:16 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2963555839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:09:16 compute-0 nova_compute[247671]: 2026-01-27 09:09:16.323 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:09:16 compute-0 nova_compute[247671]: 2026-01-27 09:09:16.328 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:09:16 compute-0 nova_compute[247671]: 2026-01-27 09:09:16.346 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:09:16 compute-0 nova_compute[247671]: 2026-01-27 09:09:16.347 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:09:16 compute-0 nova_compute[247671]: 2026-01-27 09:09:16.347 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
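The inventory dict a few lines up is where placement capacity comes from: schedulable units are (total - reserved) × allocation_ratio, the standard placement formula. Checking it against the values this host just reported:

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 18.0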
Jan 27 09:09:16 compute-0 nova_compute[247671]: 2026-01-27 09:09:16.348 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 27 09:09:16 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/381241353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:09:16 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2963555839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:09:17 compute-0 ceph-mon[74357]: pgmap v1386: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 27 09:09:17 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2980081654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:09:17 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3981054535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:09:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:18.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:18.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 09:09:19 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:09:19.253 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:09:19 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:09:19.254 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:09:19 compute-0 ceph-mon[74357]: pgmap v1387: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 09:09:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:20.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:20.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 09:09:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:21 compute-0 ceph-mon[74357]: pgmap v1388: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 09:09:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:22.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:22.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:22 compute-0 nova_compute[247671]: 2026-01-27 09:09:22.363 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:22 compute-0 nova_compute[247671]: 2026-01-27 09:09:22.364 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 410 KiB/s wr, 5 op/s
Jan 27 09:09:22 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2570048198' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:09:22 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2570048198' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:09:24 compute-0 podman[272319]: 2026-01-27 09:09:24.260387546 +0000 UTC m=+0.076590326 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 09:09:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:24.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:24.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:24 compute-0 ceph-mon[74357]: pgmap v1389: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 410 KiB/s wr, 5 op/s
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001977298238879583 of space, bias 1.0, pg target 0.5931894716638749 quantized to 32 (current 32)
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
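The pg_autoscaler targets above are reproducible from the printed inputs: pg target ≈ capacity_ratio × bias × 300 for this cluster. Reading the 300 as mon_target_pg_per_osd × OSD count is an assumption; the factor itself falls straight out of the logged numbers:

    pools = {
        ".mgr":               (2.0538165363856318e-05, 1.0),  # -> quantized to 1
        "volumes":            (0.001977298238879583,   1.0),  # -> quantized to 32
        "cephfs.cephfs.meta": (1.4540294062907128e-06, 4.0),  # -> quantized to 16
    }
    N = 300
    for pool, (ratio, bias) in pools.items():
        # Matches the logged "pg target" values before quantization
        print(pool, ratio * bias * N)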
Jan 27 09:09:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 170 B/s wr, 3 op/s
Jan 27 09:09:25 compute-0 ceph-mon[74357]: pgmap v1390: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 170 B/s wr, 3 op/s
Jan 27 09:09:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:26 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:09:26.256 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
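The transaction above is the metadata agent acknowledging the SB_Global nb_cfg bump from 09:09:19 (after the 7 s delay it logged) by writing neutron:ovn-metadata-sb-cfg=20 into its Chassis_Private row. In ovsdbapp terms that is a db_set command; a sketch, assuming `api` is an already-connected ovsdbapp OVN southbound backend and using the record UUID from the log:

    # Hypothetical, pre-connected ovsdbapp idl backend for the OVN SB DB.
    api.db_set(
        'Chassis_Private', 'fd496359-7f94-4196-96c9-9e7fb7c843a0',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),
    ).execute(check_error=True)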
Jan 27 09:09:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:26.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3018818893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:09:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3018818893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:09:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 49 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 682 B/s wr, 26 op/s
Jan 27 09:09:27 compute-0 ceph-mon[74357]: pgmap v1391: 305 pgs: 305 active+clean; 49 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 682 B/s wr, 26 op/s
Jan 27 09:09:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:28.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 49 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 23 op/s
Jan 27 09:09:29 compute-0 ceph-mon[74357]: pgmap v1392: 305 pgs: 305 active+clean; 49 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 23 op/s
Jan 27 09:09:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:30.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:30.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:09:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 33K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2788 syncs, 3.61 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2012 writes, 4405 keys, 2012 commit groups, 1.0 writes per commit group, ingest: 1.96 MB, 0.00 MB/s
                                           Interval WAL: 2012 writes, 898 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
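The interval figures in the RocksDB dump are internally consistent: 2012 WAL writes over 898 syncs is the reported 2.24 writes per sync, and roughly 1.96 MB ingested across the 600 s interval rounds to the 0.00 MB/s shown:

    print(round(2012 / 898, 2))  # 2.24 writes per sync
    print(round(1.96 / 600, 2))  # 0.0 MB/s over the 600 s interval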
Jan 27 09:09:30 compute-0 sudo[272349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:30 compute-0 sudo[272349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:30 compute-0 sudo[272349]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:30 compute-0 sudo[272374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:30 compute-0 sudo[272374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:30 compute-0 sudo[272374]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 41 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 23 op/s
Jan 27 09:09:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:31 compute-0 ceph-mon[74357]: pgmap v1393: 305 pgs: 305 active+clean; 41 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 23 op/s
Jan 27 09:09:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:32.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:32.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 23 op/s
Jan 27 09:09:33 compute-0 ceph-mon[74357]: pgmap v1394: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 23 op/s
Jan 27 09:09:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:34.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:34.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 23 op/s
Jan 27 09:09:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:36 compute-0 ceph-mon[74357]: pgmap v1395: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 23 op/s
Jan 27 09:09:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:36.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:36.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:36 compute-0 ceph-mgr[74650]: [devicehealth INFO root] Check health
Jan 27 09:09:36 compute-0 sudo[272401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:36 compute-0 sudo[272401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:36 compute-0 sudo[272401]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:36 compute-0 sudo[272427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:09:36 compute-0 sudo[272427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:36 compute-0 sudo[272427]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:36 compute-0 sudo[272452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:36 compute-0 sudo[272452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:36 compute-0 sudo[272452]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:36 compute-0 sudo[272477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 27 09:09:36 compute-0 sudo[272477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 30 op/s
Jan 27 09:09:36 compute-0 sudo[272477]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:09:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:09:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:09:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:09:36 compute-0 sudo[272523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:36 compute-0 sudo[272523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:36 compute-0 sudo[272523]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:36 compute-0 sudo[272548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:09:36 compute-0 sudo[272548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:36 compute-0 sudo[272548]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:37 compute-0 sudo[272573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:37 compute-0 sudo[272573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:37 compute-0 sudo[272573]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:37 compute-0 sudo[272598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:09:37 compute-0 sudo[272598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:37 compute-0 sudo[272598]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:09:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:09:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:09:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:09:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:09:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:09:37 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 863f9ffc-59fe-4b30-ac4a-462a85a4ea12 does not exist
Jan 27 09:09:37 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 03fed5f1-4b64-48e9-a4a0-2869f7ea8ed9 does not exist
Jan 27 09:09:37 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 18122713-ec70-4efd-96fd-d718c66e93b8 does not exist
Jan 27 09:09:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:09:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:09:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:09:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:09:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:09:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:09:37 compute-0 sudo[272654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:37 compute-0 sudo[272654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:37 compute-0 sudo[272654]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:37 compute-0 sudo[272679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:09:37 compute-0 sudo[272679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:37 compute-0 sudo[272679]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:37 compute-0 sudo[272704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:37 compute-0 sudo[272704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:37 compute-0 sudo[272704]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:37 compute-0 sudo[272729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:09:37 compute-0 sudo[272729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:37 compute-0 ceph-mon[74357]: pgmap v1396: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 30 op/s
Jan 27 09:09:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:09:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:09:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:09:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:09:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:09:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:09:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:09:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:09:38 compute-0 podman[272796]: 2026-01-27 09:09:38.141029257 +0000 UTC m=+0.022968949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:09:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:38.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:38.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:38 compute-0 podman[272796]: 2026-01-27 09:09:38.296952213 +0000 UTC m=+0.178891865 container create 638332f1e5398ba24d4aa864298f65b21d379f425791c8cffd19b4833f108f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chebyshev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:09:38 compute-0 systemd[1]: Started libpod-conmon-638332f1e5398ba24d4aa864298f65b21d379f425791c8cffd19b4833f108f72.scope.
Jan 27 09:09:38 compute-0 podman[272810]: 2026-01-27 09:09:38.416977076 +0000 UTC m=+0.079777373 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 09:09:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:09:38 compute-0 podman[272796]: 2026-01-27 09:09:38.511382169 +0000 UTC m=+0.393321831 container init 638332f1e5398ba24d4aa864298f65b21d379f425791c8cffd19b4833f108f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 09:09:38 compute-0 podman[272796]: 2026-01-27 09:09:38.518071772 +0000 UTC m=+0.400011414 container start 638332f1e5398ba24d4aa864298f65b21d379f425791c8cffd19b4833f108f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chebyshev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:09:38 compute-0 systemd[1]: libpod-638332f1e5398ba24d4aa864298f65b21d379f425791c8cffd19b4833f108f72.scope: Deactivated successfully.
Jan 27 09:09:38 compute-0 cranky_chebyshev[272829]: 167 167
Jan 27 09:09:38 compute-0 conmon[272829]: conmon 638332f1e5398ba24d4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-638332f1e5398ba24d4aa864298f65b21d379f425791c8cffd19b4833f108f72.scope/container/memory.events
Jan 27 09:09:38 compute-0 podman[272796]: 2026-01-27 09:09:38.530344588 +0000 UTC m=+0.412284230 container attach 638332f1e5398ba24d4aa864298f65b21d379f425791c8cffd19b4833f108f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 09:09:38 compute-0 podman[272796]: 2026-01-27 09:09:38.530736439 +0000 UTC m=+0.412676081 container died 638332f1e5398ba24d4aa864298f65b21d379f425791c8cffd19b4833f108f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chebyshev, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:09:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-79cf578ac40e0d2c2df97d77ab34835bbc3a8d46dc29e94789fb8fc2598c071d-merged.mount: Deactivated successfully.
Jan 27 09:09:38 compute-0 podman[272796]: 2026-01-27 09:09:38.572645385 +0000 UTC m=+0.454585027 container remove 638332f1e5398ba24d4aa864298f65b21d379f425791c8cffd19b4833f108f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chebyshev, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:09:38 compute-0 systemd[1]: libpod-conmon-638332f1e5398ba24d4aa864298f65b21d379f425791c8cffd19b4833f108f72.scope: Deactivated successfully.
Jan 27 09:09:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 7 op/s
Jan 27 09:09:38 compute-0 podman[272857]: 2026-01-27 09:09:38.75557306 +0000 UTC m=+0.046259086 container create 4e5d27520f23d41cc3a1c3bd90991c71185fef9a8989961f723f7baaab0dfa3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:09:38 compute-0 systemd[1]: Started libpod-conmon-4e5d27520f23d41cc3a1c3bd90991c71185fef9a8989961f723f7baaab0dfa3c.scope.
Jan 27 09:09:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb83974f5494afce6da80d3c79adaa0f0c888c9acb1920ea6d6ac1f9c7432b61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb83974f5494afce6da80d3c79adaa0f0c888c9acb1920ea6d6ac1f9c7432b61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb83974f5494afce6da80d3c79adaa0f0c888c9acb1920ea6d6ac1f9c7432b61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb83974f5494afce6da80d3c79adaa0f0c888c9acb1920ea6d6ac1f9c7432b61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb83974f5494afce6da80d3c79adaa0f0c888c9acb1920ea6d6ac1f9c7432b61/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:38 compute-0 podman[272857]: 2026-01-27 09:09:38.737148406 +0000 UTC m=+0.027834512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:09:38 compute-0 podman[272857]: 2026-01-27 09:09:38.840442632 +0000 UTC m=+0.131128678 container init 4e5d27520f23d41cc3a1c3bd90991c71185fef9a8989961f723f7baaab0dfa3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dirac, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 09:09:38 compute-0 podman[272857]: 2026-01-27 09:09:38.850568769 +0000 UTC m=+0.141254805 container start 4e5d27520f23d41cc3a1c3bd90991c71185fef9a8989961f723f7baaab0dfa3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dirac, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:38 compute-0 podman[272857]: 2026-01-27 09:09:38.853481129 +0000 UTC m=+0.144167225 container attach 4e5d27520f23d41cc3a1c3bd90991c71185fef9a8989961f723f7baaab0dfa3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dirac, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 09:09:39 compute-0 tender_dirac[272874]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:09:39 compute-0 tender_dirac[272874]: --> relative data size: 1.0
Jan 27 09:09:39 compute-0 tender_dirac[272874]: --> All data devices are unavailable
Jan 27 09:09:39 compute-0 systemd[1]: libpod-4e5d27520f23d41cc3a1c3bd90991c71185fef9a8989961f723f7baaab0dfa3c.scope: Deactivated successfully.
Jan 27 09:09:39 compute-0 podman[272889]: 2026-01-27 09:09:39.725107845 +0000 UTC m=+0.021562510 container died 4e5d27520f23d41cc3a1c3bd90991c71185fef9a8989961f723f7baaab0dfa3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 09:09:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb83974f5494afce6da80d3c79adaa0f0c888c9acb1920ea6d6ac1f9c7432b61-merged.mount: Deactivated successfully.
Jan 27 09:09:39 compute-0 podman[272889]: 2026-01-27 09:09:39.768861773 +0000 UTC m=+0.065316418 container remove 4e5d27520f23d41cc3a1c3bd90991c71185fef9a8989961f723f7baaab0dfa3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:09:39 compute-0 systemd[1]: libpod-conmon-4e5d27520f23d41cc3a1c3bd90991c71185fef9a8989961f723f7baaab0dfa3c.scope: Deactivated successfully.
Jan 27 09:09:39 compute-0 sudo[272729]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:39 compute-0 sudo[272904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:39 compute-0 sudo[272904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:39 compute-0 sudo[272904]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:39 compute-0 sudo[272929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:09:39 compute-0 sudo[272929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:39 compute-0 sudo[272929]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:39 compute-0 ceph-mon[74357]: pgmap v1397: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 7 op/s
Jan 27 09:09:39 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 27 09:09:39 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:39.980791) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:09:39 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 27 09:09:39 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504979980836, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2154, "num_deletes": 253, "total_data_size": 3910687, "memory_usage": 3968704, "flush_reason": "Manual Compaction"}
Jan 27 09:09:39 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504980002086, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3832330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28949, "largest_seqno": 31102, "table_properties": {"data_size": 3822517, "index_size": 6243, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20058, "raw_average_key_size": 20, "raw_value_size": 3802959, "raw_average_value_size": 3888, "num_data_blocks": 273, "num_entries": 978, "num_filter_entries": 978, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769504764, "oldest_key_time": 1769504764, "file_creation_time": 1769504979, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 21329 microseconds, and 10758 cpu microseconds.
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.002126) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3832330 bytes OK
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.002140) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.003014) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.003027) EVENT_LOG_v1 {"time_micros": 1769504980003023, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.003043) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3901968, prev total WAL file size 3901968, number of live WAL files 2.
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.003859) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3742KB)], [65(8940KB)]
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504980003904, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 12987008, "oldest_snapshot_seqno": -1}
Jan 27 09:09:40 compute-0 sudo[272954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:40 compute-0 sudo[272954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:40 compute-0 sudo[272954]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5681 keys, 10953444 bytes, temperature: kUnknown
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504980059117, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10953444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10913497, "index_size": 24669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 143499, "raw_average_key_size": 25, "raw_value_size": 10808936, "raw_average_value_size": 1902, "num_data_blocks": 1006, "num_entries": 5681, "num_filter_entries": 5681, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769504980, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:09:40 compute-0 sudo[272979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.059369) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10953444 bytes
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.060658) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.0 rd, 198.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.7 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(6.2) write-amplify(2.9) OK, records in: 6205, records dropped: 524 output_compression: NoCompression
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.060678) EVENT_LOG_v1 {"time_micros": 1769504980060668, "job": 36, "event": "compaction_finished", "compaction_time_micros": 55271, "compaction_time_cpu_micros": 23334, "output_level": 6, "num_output_files": 1, "total_output_size": 10953444, "num_input_records": 6205, "num_output_records": 5681, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504980061360, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769504980062787, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.003811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.062832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.062837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.062839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.062841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:09:40 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:09:40.062843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:09:40 compute-0 sudo[272979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:40.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:40.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:40 compute-0 podman[273044]: 2026-01-27 09:09:40.440057436 +0000 UTC m=+0.036737847 container create 244f3cfc9fe9961a89d0e2a8da97b6bbbc6337c298d88d677b63c861c518c3a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_swartz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 09:09:40 compute-0 systemd[1]: Started libpod-conmon-244f3cfc9fe9961a89d0e2a8da97b6bbbc6337c298d88d677b63c861c518c3a1.scope.
Jan 27 09:09:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:09:40 compute-0 podman[273044]: 2026-01-27 09:09:40.504421817 +0000 UTC m=+0.101102228 container init 244f3cfc9fe9961a89d0e2a8da97b6bbbc6337c298d88d677b63c861c518c3a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:09:40 compute-0 podman[273044]: 2026-01-27 09:09:40.511343716 +0000 UTC m=+0.108024107 container start 244f3cfc9fe9961a89d0e2a8da97b6bbbc6337c298d88d677b63c861c518c3a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:09:40 compute-0 podman[273044]: 2026-01-27 09:09:40.514438441 +0000 UTC m=+0.111118892 container attach 244f3cfc9fe9961a89d0e2a8da97b6bbbc6337c298d88d677b63c861c518c3a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_swartz, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:40 compute-0 vibrant_swartz[273060]: 167 167
Jan 27 09:09:40 compute-0 systemd[1]: libpod-244f3cfc9fe9961a89d0e2a8da97b6bbbc6337c298d88d677b63c861c518c3a1.scope: Deactivated successfully.
Jan 27 09:09:40 compute-0 conmon[273060]: conmon 244f3cfc9fe9961a89d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-244f3cfc9fe9961a89d0e2a8da97b6bbbc6337c298d88d677b63c861c518c3a1.scope/container/memory.events
Jan 27 09:09:40 compute-0 podman[273044]: 2026-01-27 09:09:40.423876323 +0000 UTC m=+0.020556744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:09:40 compute-0 podman[273066]: 2026-01-27 09:09:40.556685616 +0000 UTC m=+0.025648762 container died 244f3cfc9fe9961a89d0e2a8da97b6bbbc6337c298d88d677b63c861c518c3a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 09:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-daaf5fe97e863fdda0f5daecdeabbd3d5c7e7e234261768b8b43dec838e5e9dc-merged.mount: Deactivated successfully.
Jan 27 09:09:40 compute-0 podman[273066]: 2026-01-27 09:09:40.591938811 +0000 UTC m=+0.060901927 container remove 244f3cfc9fe9961a89d0e2a8da97b6bbbc6337c298d88d677b63c861c518c3a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:40 compute-0 systemd[1]: libpod-conmon-244f3cfc9fe9961a89d0e2a8da97b6bbbc6337c298d88d677b63c861c518c3a1.scope: Deactivated successfully.
Jan 27 09:09:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 76 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 19 op/s
Jan 27 09:09:40 compute-0 podman[273089]: 2026-01-27 09:09:40.758855317 +0000 UTC m=+0.039466480 container create 0a13a9bda24ba54dd66e35837fc3257a9324e6b6599c8afca885ac1ea2a77d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sammet, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:40 compute-0 systemd[1]: Started libpod-conmon-0a13a9bda24ba54dd66e35837fc3257a9324e6b6599c8afca885ac1ea2a77d4c.scope.
Jan 27 09:09:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3fc62fbdf7ddfcf51705540dffb11dedb375467d32b6bd66d01add969649fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3fc62fbdf7ddfcf51705540dffb11dedb375467d32b6bd66d01add969649fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3fc62fbdf7ddfcf51705540dffb11dedb375467d32b6bd66d01add969649fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3fc62fbdf7ddfcf51705540dffb11dedb375467d32b6bd66d01add969649fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:40 compute-0 podman[273089]: 2026-01-27 09:09:40.830765995 +0000 UTC m=+0.111377098 container init 0a13a9bda24ba54dd66e35837fc3257a9324e6b6599c8afca885ac1ea2a77d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sammet, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 09:09:40 compute-0 podman[273089]: 2026-01-27 09:09:40.740949658 +0000 UTC m=+0.021560771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:09:40 compute-0 podman[273089]: 2026-01-27 09:09:40.839620777 +0000 UTC m=+0.120231860 container start 0a13a9bda24ba54dd66e35837fc3257a9324e6b6599c8afca885ac1ea2a77d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sammet, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 09:09:40 compute-0 podman[273089]: 2026-01-27 09:09:40.842898577 +0000 UTC m=+0.123509690 container attach 0a13a9bda24ba54dd66e35837fc3257a9324e6b6599c8afca885ac1ea2a77d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sammet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 09:09:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:41 compute-0 ceph-mon[74357]: pgmap v1398: 305 pgs: 305 active+clean; 76 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 19 op/s
Jan 27 09:09:41 compute-0 zen_sammet[273106]: {
Jan 27 09:09:41 compute-0 zen_sammet[273106]:     "0": [
Jan 27 09:09:41 compute-0 zen_sammet[273106]:         {
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             "devices": [
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "/dev/loop3"
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             ],
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             "lv_name": "ceph_lv0",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             "lv_size": "7511998464",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             "name": "ceph_lv0",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             "tags": {
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "ceph.cluster_name": "ceph",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "ceph.crush_device_class": "",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "ceph.encrypted": "0",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "ceph.osd_id": "0",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "ceph.type": "block",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:                 "ceph.vdo": "0"
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             },
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             "type": "block",
Jan 27 09:09:41 compute-0 zen_sammet[273106]:             "vg_name": "ceph_vg0"
Jan 27 09:09:41 compute-0 zen_sammet[273106]:         }
Jan 27 09:09:41 compute-0 zen_sammet[273106]:     ]
Jan 27 09:09:41 compute-0 zen_sammet[273106]: }
Jan 27 09:09:41 compute-0 systemd[1]: libpod-0a13a9bda24ba54dd66e35837fc3257a9324e6b6599c8afca885ac1ea2a77d4c.scope: Deactivated successfully.
Jan 27 09:09:41 compute-0 podman[273089]: 2026-01-27 09:09:41.769535169 +0000 UTC m=+1.050146252 container died 0a13a9bda24ba54dd66e35837fc3257a9324e6b6599c8afca885ac1ea2a77d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sammet, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:09:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab3fc62fbdf7ddfcf51705540dffb11dedb375467d32b6bd66d01add969649fe-merged.mount: Deactivated successfully.
Jan 27 09:09:41 compute-0 podman[273089]: 2026-01-27 09:09:41.837827988 +0000 UTC m=+1.118439071 container remove 0a13a9bda24ba54dd66e35837fc3257a9324e6b6599c8afca885ac1ea2a77d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sammet, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:41 compute-0 systemd[1]: libpod-conmon-0a13a9bda24ba54dd66e35837fc3257a9324e6b6599c8afca885ac1ea2a77d4c.scope: Deactivated successfully.
Jan 27 09:09:41 compute-0 sudo[272979]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:41 compute-0 sudo[273130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:41 compute-0 sudo[273130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:41 compute-0 sudo[273130]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:41 compute-0 sudo[273155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:09:41 compute-0 sudo[273155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:41 compute-0 sudo[273155]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:42 compute-0 sudo[273180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:42 compute-0 sudo[273180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:42 compute-0 sudo[273180]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:42 compute-0 sudo[273205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:09:42 compute-0 sudo[273205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:42.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:42.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:42 compute-0 podman[273272]: 2026-01-27 09:09:42.409362934 +0000 UTC m=+0.032652634 container create ef960355a8281833505986beabb41a5c90bb8b862f5895a0b8032bbde5efc10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:09:42 compute-0 systemd[1]: Started libpod-conmon-ef960355a8281833505986beabb41a5c90bb8b862f5895a0b8032bbde5efc10d.scope.
Jan 27 09:09:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:09:42 compute-0 podman[273272]: 2026-01-27 09:09:42.484700505 +0000 UTC m=+0.107990215 container init ef960355a8281833505986beabb41a5c90bb8b862f5895a0b8032bbde5efc10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:42 compute-0 podman[273272]: 2026-01-27 09:09:42.492011725 +0000 UTC m=+0.115301415 container start ef960355a8281833505986beabb41a5c90bb8b862f5895a0b8032bbde5efc10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_driscoll, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 09:09:42 compute-0 podman[273272]: 2026-01-27 09:09:42.395457104 +0000 UTC m=+0.018746834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:09:42 compute-0 podman[273272]: 2026-01-27 09:09:42.495589273 +0000 UTC m=+0.118878993 container attach ef960355a8281833505986beabb41a5c90bb8b862f5895a0b8032bbde5efc10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_driscoll, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:09:42 compute-0 festive_driscoll[273289]: 167 167
Jan 27 09:09:42 compute-0 systemd[1]: libpod-ef960355a8281833505986beabb41a5c90bb8b862f5895a0b8032bbde5efc10d.scope: Deactivated successfully.
Jan 27 09:09:42 compute-0 podman[273272]: 2026-01-27 09:09:42.497552087 +0000 UTC m=+0.120841787 container died ef960355a8281833505986beabb41a5c90bb8b862f5895a0b8032bbde5efc10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 27 09:09:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1473518391a337a056ca477b137a4abe17b327012e96cf88a21e3d66b398d98a-merged.mount: Deactivated successfully.
Jan 27 09:09:42 compute-0 podman[273272]: 2026-01-27 09:09:42.535434994 +0000 UTC m=+0.158724694 container remove ef960355a8281833505986beabb41a5c90bb8b862f5895a0b8032bbde5efc10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:09:42 compute-0 systemd[1]: libpod-conmon-ef960355a8281833505986beabb41a5c90bb8b862f5895a0b8032bbde5efc10d.scope: Deactivated successfully.
Jan 27 09:09:42 compute-0 podman[273312]: 2026-01-27 09:09:42.679727531 +0000 UTC m=+0.039549733 container create aa08adbb15e49eea94cd34746e7bcb66adcae90354f88dc50d864e5ddf994887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_booth, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:42 compute-0 systemd[1]: Started libpod-conmon-aa08adbb15e49eea94cd34746e7bcb66adcae90354f88dc50d864e5ddf994887.scope.
Jan 27 09:09:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 27 09:09:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f810660dbf4ce4268ef92256123ea8f46bf5bf1e63b5eb5b57b78840ab6a3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f810660dbf4ce4268ef92256123ea8f46bf5bf1e63b5eb5b57b78840ab6a3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f810660dbf4ce4268ef92256123ea8f46bf5bf1e63b5eb5b57b78840ab6a3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f810660dbf4ce4268ef92256123ea8f46bf5bf1e63b5eb5b57b78840ab6a3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:42 compute-0 podman[273312]: 2026-01-27 09:09:42.661503353 +0000 UTC m=+0.021325585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:09:42 compute-0 podman[273312]: 2026-01-27 09:09:42.773743733 +0000 UTC m=+0.133565955 container init aa08adbb15e49eea94cd34746e7bcb66adcae90354f88dc50d864e5ddf994887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:09:42 compute-0 podman[273312]: 2026-01-27 09:09:42.781757432 +0000 UTC m=+0.141579634 container start aa08adbb15e49eea94cd34746e7bcb66adcae90354f88dc50d864e5ddf994887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:09:42 compute-0 podman[273312]: 2026-01-27 09:09:42.78567402 +0000 UTC m=+0.145496242 container attach aa08adbb15e49eea94cd34746e7bcb66adcae90354f88dc50d864e5ddf994887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_booth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:09:43 compute-0 unruffled_booth[273329]: {
Jan 27 09:09:43 compute-0 unruffled_booth[273329]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:09:43 compute-0 unruffled_booth[273329]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:09:43 compute-0 unruffled_booth[273329]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:09:43 compute-0 unruffled_booth[273329]:         "osd_id": 0,
Jan 27 09:09:43 compute-0 unruffled_booth[273329]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:09:43 compute-0 unruffled_booth[273329]:         "type": "bluestore"
Jan 27 09:09:43 compute-0 unruffled_booth[273329]:     }
Jan 27 09:09:43 compute-0 unruffled_booth[273329]: }
Jan 27 09:09:43 compute-0 systemd[1]: libpod-aa08adbb15e49eea94cd34746e7bcb66adcae90354f88dc50d864e5ddf994887.scope: Deactivated successfully.
Jan 27 09:09:43 compute-0 podman[273312]: 2026-01-27 09:09:43.630828622 +0000 UTC m=+0.990650834 container died aa08adbb15e49eea94cd34746e7bcb66adcae90354f88dc50d864e5ddf994887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_booth, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 09:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-63f810660dbf4ce4268ef92256123ea8f46bf5bf1e63b5eb5b57b78840ab6a3a-merged.mount: Deactivated successfully.
Jan 27 09:09:43 compute-0 podman[273312]: 2026-01-27 09:09:43.687490362 +0000 UTC m=+1.047312564 container remove aa08adbb15e49eea94cd34746e7bcb66adcae90354f88dc50d864e5ddf994887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:09:43 compute-0 systemd[1]: libpod-conmon-aa08adbb15e49eea94cd34746e7bcb66adcae90354f88dc50d864e5ddf994887.scope: Deactivated successfully.
Jan 27 09:09:43 compute-0 sudo[273205]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:09:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:09:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:09:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:09:43 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 65ee3efb-b36e-4e64-9568-89c33e31ac61 does not exist
Jan 27 09:09:43 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c55a1c65-c1db-4a6e-a20e-f2cc241f838e does not exist
Jan 27 09:09:43 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev afc3bfbe-9c28-4b26-aa70-2b45d307b19c does not exist
Jan 27 09:09:43 compute-0 ceph-mon[74357]: pgmap v1399: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 27 09:09:43 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1769723226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:09:43 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1769723226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:09:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:09:43 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:09:43 compute-0 sudo[273363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:43 compute-0 sudo[273363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:43 compute-0 sudo[273363]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:43 compute-0 sudo[273388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:09:43 compute-0 sudo[273388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:43 compute-0 sudo[273388]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:44.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 09:09:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:44.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 09:09:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 27 09:09:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:09:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:09:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:09:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:09:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:09:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:09:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:09:45 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/942711538' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:09:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:09:45 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/942711538' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:09:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:46 compute-0 ceph-mon[74357]: pgmap v1400: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 27 09:09:46 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/942711538' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:09:46 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/942711538' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:09:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:46.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:46.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Jan 27 09:09:47 compute-0 nova_compute[247671]: 2026-01-27 09:09:47.343 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:09:47 compute-0 ceph-mon[74357]: pgmap v1401: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Jan 27 09:09:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:48.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:48.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:48 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/326214208' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:09:48 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/326214208' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:09:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 27 09:09:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:09:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3616294357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:09:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:09:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3616294357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:09:49 compute-0 ceph-mon[74357]: pgmap v1402: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 27 09:09:49 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3616294357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:09:49 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3616294357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:09:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:50.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:50.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 53 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 27 09:09:50 compute-0 sudo[273417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:50 compute-0 sudo[273417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:50 compute-0 sudo[273417]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:50 compute-0 sudo[273442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:09:50 compute-0 sudo[273442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:09:50 compute-0 sudo[273442]: pam_unix(sudo:session): session closed for user root
Jan 27 09:09:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:51 compute-0 ceph-mon[74357]: pgmap v1403: 305 pgs: 305 active+clean; 53 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 27 09:09:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:52.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:52.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 594 KiB/s wr, 81 op/s
Jan 27 09:09:53 compute-0 ceph-mon[74357]: pgmap v1404: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 594 KiB/s wr, 81 op/s
Jan 27 09:09:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:09:54.249 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:09:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:09:54.250 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:09:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:09:54.250 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:09:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:54.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:54.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.3 KiB/s wr, 56 op/s
Jan 27 09:09:55 compute-0 podman[273469]: 2026-01-27 09:09:55.269732941 +0000 UTC m=+0.087591057 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 27 09:09:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:55 compute-0 ceph-mon[74357]: pgmap v1405: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.3 KiB/s wr, 56 op/s
Jan 27 09:09:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:09:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:56.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:09:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:56.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:09:56.535 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:09:56 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:09:56.536 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:09:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.3 KiB/s wr, 56 op/s
Jan 27 09:09:57 compute-0 ceph-mon[74357]: pgmap v1406: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.3 KiB/s wr, 56 op/s
Jan 27 09:09:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:09:58.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:09:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:09:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:09:58.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:09:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 852 B/s wr, 32 op/s
Jan 27 09:09:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:09:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4040611263' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:09:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:09:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4040611263' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:09:59 compute-0 ceph-mon[74357]: pgmap v1407: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 852 B/s wr, 32 op/s
Jan 27 09:09:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/4040611263' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:09:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/4040611263' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:10:00 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 27 09:10:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:00.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:00.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 853 B/s wr, 32 op/s
Jan 27 09:10:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:00 compute-0 ceph-mon[74357]: overall HEALTH_OK
Jan 27 09:10:01 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:10:01.537 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:10:01 compute-0 ceph-mon[74357]: pgmap v1408: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 853 B/s wr, 32 op/s
Jan 27 09:10:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:02.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:02.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 3.4 KiB/s rd, 852 B/s wr, 5 op/s
Jan 27 09:10:04 compute-0 ceph-mon[74357]: pgmap v1409: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 3.4 KiB/s rd, 852 B/s wr, 5 op/s
Jan 27 09:10:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:04.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:04.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:05 compute-0 nova_compute[247671]: 2026-01-27 09:10:05.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:10:05 compute-0 nova_compute[247671]: 2026-01-27 09:10:05.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:10:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:06 compute-0 ceph-mon[74357]: pgmap v1410: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:06.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:06.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:08 compute-0 ceph-mon[74357]: pgmap v1411: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:08.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:08.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:09 compute-0 podman[273502]: 2026-01-27 09:10:09.253339548 +0000 UTC m=+0.062511911 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 27 09:10:09 compute-0 ceph-mon[74357]: pgmap v1412: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:09 compute-0 nova_compute[247671]: 2026-01-27 09:10:09.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:10:09 compute-0 nova_compute[247671]: 2026-01-27 09:10:09.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:10:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:10.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:10.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:10 compute-0 sudo[273523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:10 compute-0 sudo[273523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:10 compute-0 sudo[273523]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:10 compute-0 sudo[273548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:10 compute-0 sudo[273548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:10 compute-0 sudo[273548]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:11 compute-0 nova_compute[247671]: 2026-01-27 09:10:11.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:10:11 compute-0 ceph-mon[74357]: pgmap v1413: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:12.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:12.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:13 compute-0 nova_compute[247671]: 2026-01-27 09:10:13.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:10:13 compute-0 ceph-mon[74357]: pgmap v1414: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:14.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:14.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:14 compute-0 nova_compute[247671]: 2026-01-27 09:10:14.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:10:14 compute-0 nova_compute[247671]: 2026-01-27 09:10:14.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:10:14 compute-0 nova_compute[247671]: 2026-01-27 09:10:14.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:10:14 compute-0 nova_compute[247671]: 2026-01-27 09:10:14.476 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:10:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:10:15
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log']
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:10:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:10:15 compute-0 nova_compute[247671]: 2026-01-27 09:10:15.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:10:15 compute-0 nova_compute[247671]: 2026-01-27 09:10:15.458 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:10:15 compute-0 nova_compute[247671]: 2026-01-27 09:10:15.458 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:10:15 compute-0 nova_compute[247671]: 2026-01-27 09:10:15.458 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:10:15 compute-0 nova_compute[247671]: 2026-01-27 09:10:15.458 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:10:15 compute-0 nova_compute[247671]: 2026-01-27 09:10:15.458 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:10:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:10:15 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4280871750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:10:15 compute-0 nova_compute[247671]: 2026-01-27 09:10:15.926 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:10:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.070 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.071 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.072 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.072 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:10:16 compute-0 ceph-mon[74357]: pgmap v1415: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:16 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4280871750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:10:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:16.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:16.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.388 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.388 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.388 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.672 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing inventories for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 09:10:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.802 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Updating ProviderTree inventory for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.802 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Updating inventory in ProviderTree for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.832 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing aggregate associations for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.869 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing trait associations for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 09:10:16 compute-0 nova_compute[247671]: 2026-01-27 09:10:16.914 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:10:17 compute-0 ceph-mon[74357]: pgmap v1416: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:17 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1907621631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:10:17 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/785481654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:10:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:10:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/152699789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:10:17 compute-0 nova_compute[247671]: 2026-01-27 09:10:17.364 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:10:17 compute-0 nova_compute[247671]: 2026-01-27 09:10:17.369 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:10:17 compute-0 nova_compute[247671]: 2026-01-27 09:10:17.391 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:10:17 compute-0 nova_compute[247671]: 2026-01-27 09:10:17.392 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:10:17 compute-0 nova_compute[247671]: 2026-01-27 09:10:17.392 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:10:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/152699789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:10:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4153928588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:10:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2306265436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:10:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:18.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:18.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:19 compute-0 ceph-mon[74357]: pgmap v1417: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:20.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:20.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:22 compute-0 ceph-mon[74357]: pgmap v1418: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:22.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:22.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:24 compute-0 ceph-mon[74357]: pgmap v1419: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:24.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:24.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:24 compute-0 nova_compute[247671]: 2026-01-27 09:10:24.394 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:10:24 compute-0 nova_compute[247671]: 2026-01-27 09:10:24.394 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:10:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:26 compute-0 ceph-mon[74357]: pgmap v1420: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:26 compute-0 podman[273624]: 2026-01-27 09:10:26.307570854 +0000 UTC m=+0.127771577 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 09:10:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:26.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:26.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:28 compute-0 ceph-mon[74357]: pgmap v1421: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:28.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:28.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:30 compute-0 ceph-mon[74357]: pgmap v1422: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:30.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:30.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:30 compute-0 sudo[273654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:30 compute-0 sudo[273654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:30 compute-0 sudo[273654]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:31 compute-0 sudo[273679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:31 compute-0 sudo[273679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:31 compute-0 sudo[273679]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:32 compute-0 ceph-mon[74357]: pgmap v1423: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:32.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:32.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:34.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:34.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:34 compute-0 ceph-mon[74357]: pgmap v1424: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:35 compute-0 ceph-mon[74357]: pgmap v1425: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:36.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:36.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:36 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:10:36.717 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:10:36 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:10:36.717 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:10:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:37 compute-0 ceph-mon[74357]: pgmap v1426: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:38.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:38.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:39 compute-0 ceph-mon[74357]: pgmap v1427: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:40.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:40 compute-0 podman[273708]: 2026-01-27 09:10:40.388256236 +0000 UTC m=+0.047650635 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 09:10:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:40.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:41 compute-0 ceph-mon[74357]: pgmap v1428: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:42.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:42.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:43 compute-0 ceph-mon[74357]: pgmap v1429: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:44 compute-0 sudo[273730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:44 compute-0 sudo[273730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:44 compute-0 sudo[273730]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:44 compute-0 sudo[273755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:10:44 compute-0 sudo[273755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:44 compute-0 sudo[273755]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:10:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:44.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:10:44 compute-0 sudo[273780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:44 compute-0 sudo[273780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:44 compute-0 sudo[273780]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:44 compute-0 sudo[273805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:10:44 compute-0 sudo[273805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:44.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 09:10:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 09:10:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 09:10:44 compute-0 sudo[273805]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 09:10:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 27 09:10:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 27 09:10:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:10:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:10:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:10:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:10:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:10:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:10:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 27 09:10:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 27 09:10:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:10:45 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:10:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:10:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:10:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:10:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:45 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 6ca96524-c8f9-4350-8376-99555a98822c does not exist
Jan 27 09:10:45 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 1de87a77-c2b8-454e-8658-f449c9fae9b3 does not exist
Jan 27 09:10:45 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 3a5dcba8-0ef5-44c5-bb1e-1dd521658b0a does not exist
Jan 27 09:10:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:10:45 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:10:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:10:45 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:10:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:10:45 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:10:45 compute-0 ceph-mon[74357]: pgmap v1430: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 27 09:10:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 27 09:10:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:10:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:10:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:10:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:10:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:10:45 compute-0 sudo[273862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:45 compute-0 sudo[273862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:45 compute-0 sudo[273862]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:45 compute-0 sudo[273887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:10:45 compute-0 sudo[273887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:45 compute-0 sudo[273887]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:45 compute-0 sudo[273912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:45 compute-0 sudo[273912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:45 compute-0 sudo[273912]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:46 compute-0 sudo[273937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:10:46 compute-0 sudo[273937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:46 compute-0 podman[274000]: 2026-01-27 09:10:46.340381509 +0000 UTC m=+0.061264347 container create c872b300e544ade37ec8b425e1df63808ed398a1a8a88a7425e180d6db15e32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_darwin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:10:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:46.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:46 compute-0 systemd[1]: Started libpod-conmon-c872b300e544ade37ec8b425e1df63808ed398a1a8a88a7425e180d6db15e32b.scope.
Jan 27 09:10:46 compute-0 podman[274000]: 2026-01-27 09:10:46.298432382 +0000 UTC m=+0.019315250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:10:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:10:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:46.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:46 compute-0 podman[274000]: 2026-01-27 09:10:46.504516489 +0000 UTC m=+0.225399347 container init c872b300e544ade37ec8b425e1df63808ed398a1a8a88a7425e180d6db15e32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:10:46 compute-0 podman[274000]: 2026-01-27 09:10:46.511135601 +0000 UTC m=+0.232018439 container start c872b300e544ade37ec8b425e1df63808ed398a1a8a88a7425e180d6db15e32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 27 09:10:46 compute-0 amazing_darwin[274016]: 167 167
Jan 27 09:10:46 compute-0 systemd[1]: libpod-c872b300e544ade37ec8b425e1df63808ed398a1a8a88a7425e180d6db15e32b.scope: Deactivated successfully.
Jan 27 09:10:46 compute-0 podman[274000]: 2026-01-27 09:10:46.568440768 +0000 UTC m=+0.289323606 container attach c872b300e544ade37ec8b425e1df63808ed398a1a8a88a7425e180d6db15e32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_darwin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 27 09:10:46 compute-0 podman[274000]: 2026-01-27 09:10:46.570004612 +0000 UTC m=+0.290887450 container died c872b300e544ade37ec8b425e1df63808ed398a1a8a88a7425e180d6db15e32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_darwin, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 09:10:46 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:10:46.720 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:10:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-45acea07dc10ccb07d874fd27b8f150d0d4a513d27988c3c281add7a4a1a80de-merged.mount: Deactivated successfully.
Jan 27 09:10:46 compute-0 podman[274000]: 2026-01-27 09:10:46.828081842 +0000 UTC m=+0.548964680 container remove c872b300e544ade37ec8b425e1df63808ed398a1a8a88a7425e180d6db15e32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_darwin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:10:46 compute-0 systemd[1]: libpod-conmon-c872b300e544ade37ec8b425e1df63808ed398a1a8a88a7425e180d6db15e32b.scope: Deactivated successfully.
Jan 27 09:10:46 compute-0 podman[274041]: 2026-01-27 09:10:46.985149089 +0000 UTC m=+0.050185964 container create ef5533bd2547089228ba0d3c0cb6cf0e07764fe6a6997b46dc9d0d5adde393d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 09:10:47 compute-0 systemd[1]: Started libpod-conmon-ef5533bd2547089228ba0d3c0cb6cf0e07764fe6a6997b46dc9d0d5adde393d3.scope.
Jan 27 09:10:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:10:47 compute-0 podman[274041]: 2026-01-27 09:10:46.958247213 +0000 UTC m=+0.023284118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d316481d7788c44a36ae36215f9ec1d0690d83b8f2689af933f4dfa70a994b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d316481d7788c44a36ae36215f9ec1d0690d83b8f2689af933f4dfa70a994b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d316481d7788c44a36ae36215f9ec1d0690d83b8f2689af933f4dfa70a994b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d316481d7788c44a36ae36215f9ec1d0690d83b8f2689af933f4dfa70a994b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d316481d7788c44a36ae36215f9ec1d0690d83b8f2689af933f4dfa70a994b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:47 compute-0 podman[274041]: 2026-01-27 09:10:47.06999294 +0000 UTC m=+0.135029835 container init ef5533bd2547089228ba0d3c0cb6cf0e07764fe6a6997b46dc9d0d5adde393d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 27 09:10:47 compute-0 podman[274041]: 2026-01-27 09:10:47.075430439 +0000 UTC m=+0.140467354 container start ef5533bd2547089228ba0d3c0cb6cf0e07764fe6a6997b46dc9d0d5adde393d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:10:47 compute-0 podman[274041]: 2026-01-27 09:10:47.078792951 +0000 UTC m=+0.143829866 container attach ef5533bd2547089228ba0d3c0cb6cf0e07764fe6a6997b46dc9d0d5adde393d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:10:47 compute-0 ceph-mon[74357]: pgmap v1431: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:47 compute-0 musing_aryabhata[274057]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:10:47 compute-0 musing_aryabhata[274057]: --> relative data size: 1.0
Jan 27 09:10:47 compute-0 musing_aryabhata[274057]: --> All data devices are unavailable
Jan 27 09:10:47 compute-0 systemd[1]: libpod-ef5533bd2547089228ba0d3c0cb6cf0e07764fe6a6997b46dc9d0d5adde393d3.scope: Deactivated successfully.
Jan 27 09:10:47 compute-0 conmon[274057]: conmon ef5533bd2547089228ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef5533bd2547089228ba0d3c0cb6cf0e07764fe6a6997b46dc9d0d5adde393d3.scope/container/memory.events
Jan 27 09:10:47 compute-0 podman[274041]: 2026-01-27 09:10:47.934076931 +0000 UTC m=+0.999113796 container died ef5533bd2547089228ba0d3c0cb6cf0e07764fe6a6997b46dc9d0d5adde393d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5d316481d7788c44a36ae36215f9ec1d0690d83b8f2689af933f4dfa70a994b-merged.mount: Deactivated successfully.
Jan 27 09:10:47 compute-0 podman[274041]: 2026-01-27 09:10:47.982716372 +0000 UTC m=+1.047753247 container remove ef5533bd2547089228ba0d3c0cb6cf0e07764fe6a6997b46dc9d0d5adde393d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:10:47 compute-0 systemd[1]: libpod-conmon-ef5533bd2547089228ba0d3c0cb6cf0e07764fe6a6997b46dc9d0d5adde393d3.scope: Deactivated successfully.
Jan 27 09:10:48 compute-0 sudo[273937]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:48 compute-0 sudo[274086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:48 compute-0 sudo[274086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:48 compute-0 sudo[274086]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:48 compute-0 sudo[274111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:10:48 compute-0 sudo[274111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:48 compute-0 sudo[274111]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:48 compute-0 sudo[274136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:48 compute-0 sudo[274136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:48 compute-0 sudo[274136]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:48 compute-0 sudo[274161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:10:48 compute-0 sudo[274161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:48.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:48.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:48 compute-0 podman[274226]: 2026-01-27 09:10:48.531702331 +0000 UTC m=+0.040547121 container create 8df4f18103416e386e475fa3b59cb213b8c50f1841199125f8f766bbce89e8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hawking, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:10:48 compute-0 systemd[1]: Started libpod-conmon-8df4f18103416e386e475fa3b59cb213b8c50f1841199125f8f766bbce89e8d3.scope.
Jan 27 09:10:48 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:10:48 compute-0 podman[274226]: 2026-01-27 09:10:48.591434465 +0000 UTC m=+0.100279255 container init 8df4f18103416e386e475fa3b59cb213b8c50f1841199125f8f766bbce89e8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 27 09:10:48 compute-0 podman[274226]: 2026-01-27 09:10:48.596911505 +0000 UTC m=+0.105756315 container start 8df4f18103416e386e475fa3b59cb213b8c50f1841199125f8f766bbce89e8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hawking, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:10:48 compute-0 podman[274226]: 2026-01-27 09:10:48.599979179 +0000 UTC m=+0.108823969 container attach 8df4f18103416e386e475fa3b59cb213b8c50f1841199125f8f766bbce89e8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hawking, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:10:48 compute-0 sweet_hawking[274243]: 167 167
Jan 27 09:10:48 compute-0 systemd[1]: libpod-8df4f18103416e386e475fa3b59cb213b8c50f1841199125f8f766bbce89e8d3.scope: Deactivated successfully.
Jan 27 09:10:48 compute-0 podman[274226]: 2026-01-27 09:10:48.601028078 +0000 UTC m=+0.109872868 container died 8df4f18103416e386e475fa3b59cb213b8c50f1841199125f8f766bbce89e8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 27 09:10:48 compute-0 podman[274226]: 2026-01-27 09:10:48.515432296 +0000 UTC m=+0.024277116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ce6cfae73304e778b16d7ecd53e18187c966ec4a5cae3f581ed5c82d45bb79c-merged.mount: Deactivated successfully.
Jan 27 09:10:48 compute-0 podman[274226]: 2026-01-27 09:10:48.633224369 +0000 UTC m=+0.142069159 container remove 8df4f18103416e386e475fa3b59cb213b8c50f1841199125f8f766bbce89e8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hawking, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:10:48 compute-0 systemd[1]: libpod-conmon-8df4f18103416e386e475fa3b59cb213b8c50f1841199125f8f766bbce89e8d3.scope: Deactivated successfully.
Jan 27 09:10:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:48 compute-0 podman[274267]: 2026-01-27 09:10:48.778063241 +0000 UTC m=+0.042401190 container create 585245aefcc15f36a1c705ddb1d1ebb705dd7864fd23492281d0e7471acb4240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_easley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 27 09:10:48 compute-0 systemd[1]: Started libpod-conmon-585245aefcc15f36a1c705ddb1d1ebb705dd7864fd23492281d0e7471acb4240.scope.
Jan 27 09:10:48 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300ee9f926d57220ae8174fe9f0f4a5bc9a32ba142bb0bd2a4f2afba4ebe7162/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300ee9f926d57220ae8174fe9f0f4a5bc9a32ba142bb0bd2a4f2afba4ebe7162/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300ee9f926d57220ae8174fe9f0f4a5bc9a32ba142bb0bd2a4f2afba4ebe7162/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:48 compute-0 podman[274267]: 2026-01-27 09:10:48.758779544 +0000 UTC m=+0.023117523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300ee9f926d57220ae8174fe9f0f4a5bc9a32ba142bb0bd2a4f2afba4ebe7162/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:48 compute-0 podman[274267]: 2026-01-27 09:10:48.867610941 +0000 UTC m=+0.131948920 container init 585245aefcc15f36a1c705ddb1d1ebb705dd7864fd23492281d0e7471acb4240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_easley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:10:48 compute-0 podman[274267]: 2026-01-27 09:10:48.872596288 +0000 UTC m=+0.136934237 container start 585245aefcc15f36a1c705ddb1d1ebb705dd7864fd23492281d0e7471acb4240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_easley, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:10:48 compute-0 podman[274267]: 2026-01-27 09:10:48.875634851 +0000 UTC m=+0.139972840 container attach 585245aefcc15f36a1c705ddb1d1ebb705dd7864fd23492281d0e7471acb4240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:10:49 compute-0 eager_easley[274284]: {
Jan 27 09:10:49 compute-0 eager_easley[274284]:     "0": [
Jan 27 09:10:49 compute-0 eager_easley[274284]:         {
Jan 27 09:10:49 compute-0 eager_easley[274284]:             "devices": [
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "/dev/loop3"
Jan 27 09:10:49 compute-0 eager_easley[274284]:             ],
Jan 27 09:10:49 compute-0 eager_easley[274284]:             "lv_name": "ceph_lv0",
Jan 27 09:10:49 compute-0 eager_easley[274284]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:10:49 compute-0 eager_easley[274284]:             "lv_size": "7511998464",
Jan 27 09:10:49 compute-0 eager_easley[274284]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:10:49 compute-0 eager_easley[274284]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:10:49 compute-0 eager_easley[274284]:             "name": "ceph_lv0",
Jan 27 09:10:49 compute-0 eager_easley[274284]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:10:49 compute-0 eager_easley[274284]:             "tags": {
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "ceph.cluster_name": "ceph",
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "ceph.crush_device_class": "",
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "ceph.encrypted": "0",
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "ceph.osd_id": "0",
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "ceph.type": "block",
Jan 27 09:10:49 compute-0 eager_easley[274284]:                 "ceph.vdo": "0"
Jan 27 09:10:49 compute-0 eager_easley[274284]:             },
Jan 27 09:10:49 compute-0 eager_easley[274284]:             "type": "block",
Jan 27 09:10:49 compute-0 eager_easley[274284]:             "vg_name": "ceph_vg0"
Jan 27 09:10:49 compute-0 eager_easley[274284]:         }
Jan 27 09:10:49 compute-0 eager_easley[274284]:     ]
Jan 27 09:10:49 compute-0 eager_easley[274284]: }
Jan 27 09:10:49 compute-0 systemd[1]: libpod-585245aefcc15f36a1c705ddb1d1ebb705dd7864fd23492281d0e7471acb4240.scope: Deactivated successfully.
Jan 27 09:10:49 compute-0 podman[274267]: 2026-01-27 09:10:49.603247347 +0000 UTC m=+0.867585306 container died 585245aefcc15f36a1c705ddb1d1ebb705dd7864fd23492281d0e7471acb4240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-300ee9f926d57220ae8174fe9f0f4a5bc9a32ba142bb0bd2a4f2afba4ebe7162-merged.mount: Deactivated successfully.
Jan 27 09:10:49 compute-0 podman[274267]: 2026-01-27 09:10:49.660765711 +0000 UTC m=+0.925103660 container remove 585245aefcc15f36a1c705ddb1d1ebb705dd7864fd23492281d0e7471acb4240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 27 09:10:49 compute-0 systemd[1]: libpod-conmon-585245aefcc15f36a1c705ddb1d1ebb705dd7864fd23492281d0e7471acb4240.scope: Deactivated successfully.
Jan 27 09:10:49 compute-0 sudo[274161]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:49 compute-0 sudo[274306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:49 compute-0 sudo[274306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:49 compute-0 sudo[274306]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:49 compute-0 sudo[274331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:10:49 compute-0 sudo[274331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:49 compute-0 sudo[274331]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:49 compute-0 ceph-mon[74357]: pgmap v1432: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:10:49 compute-0 sudo[274356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:49 compute-0 sudo[274356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:49 compute-0 sudo[274356]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:49 compute-0 sudo[274381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:10:49 compute-0 sudo[274381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:50.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:50 compute-0 podman[274446]: 2026-01-27 09:10:50.377966433 +0000 UTC m=+0.061872554 container create 91acbffb9299cd0b241a49de7e7cd5455e2416500329c4d4fa6288a56a70c83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noyce, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:10:50 compute-0 systemd[1]: Started libpod-conmon-91acbffb9299cd0b241a49de7e7cd5455e2416500329c4d4fa6288a56a70c83c.scope.
Jan 27 09:10:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:50.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:10:50 compute-0 podman[274446]: 2026-01-27 09:10:50.356448464 +0000 UTC m=+0.040354605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:10:50 compute-0 podman[274446]: 2026-01-27 09:10:50.461401696 +0000 UTC m=+0.145307827 container init 91acbffb9299cd0b241a49de7e7cd5455e2416500329c4d4fa6288a56a70c83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noyce, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 27 09:10:50 compute-0 podman[274446]: 2026-01-27 09:10:50.466697151 +0000 UTC m=+0.150603242 container start 91acbffb9299cd0b241a49de7e7cd5455e2416500329c4d4fa6288a56a70c83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:10:50 compute-0 podman[274446]: 2026-01-27 09:10:50.470062073 +0000 UTC m=+0.153968264 container attach 91acbffb9299cd0b241a49de7e7cd5455e2416500329c4d4fa6288a56a70c83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noyce, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:10:50 compute-0 goofy_noyce[274462]: 167 167
Jan 27 09:10:50 compute-0 systemd[1]: libpod-91acbffb9299cd0b241a49de7e7cd5455e2416500329c4d4fa6288a56a70c83c.scope: Deactivated successfully.
Jan 27 09:10:50 compute-0 podman[274446]: 2026-01-27 09:10:50.471297306 +0000 UTC m=+0.155203417 container died 91acbffb9299cd0b241a49de7e7cd5455e2416500329c4d4fa6288a56a70c83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noyce, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:10:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb6ee7a08089ab1608d4f91a73bdaad24b9a39224b81034e922f9f7640e95e2b-merged.mount: Deactivated successfully.
Jan 27 09:10:50 compute-0 podman[274446]: 2026-01-27 09:10:50.510093988 +0000 UTC m=+0.194000079 container remove 91acbffb9299cd0b241a49de7e7cd5455e2416500329c4d4fa6288a56a70c83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noyce, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 27 09:10:50 compute-0 systemd[1]: libpod-conmon-91acbffb9299cd0b241a49de7e7cd5455e2416500329c4d4fa6288a56a70c83c.scope: Deactivated successfully.
Jan 27 09:10:50 compute-0 podman[274486]: 2026-01-27 09:10:50.68269629 +0000 UTC m=+0.047579122 container create d952326c65df867bd078b6d7d0be03eda33b025528b3a73167d09ff4183dc49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goodall, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:10:50 compute-0 systemd[1]: Started libpod-conmon-d952326c65df867bd078b6d7d0be03eda33b025528b3a73167d09ff4183dc49a.scope.
Jan 27 09:10:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04107ac98fb0402317439224f03a700cf19af882e64a6431132106786ef2d976/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04107ac98fb0402317439224f03a700cf19af882e64a6431132106786ef2d976/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04107ac98fb0402317439224f03a700cf19af882e64a6431132106786ef2d976/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04107ac98fb0402317439224f03a700cf19af882e64a6431132106786ef2d976/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:50 compute-0 podman[274486]: 2026-01-27 09:10:50.660969016 +0000 UTC m=+0.025851928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:10:50 compute-0 podman[274486]: 2026-01-27 09:10:50.755291797 +0000 UTC m=+0.120174629 container init d952326c65df867bd078b6d7d0be03eda33b025528b3a73167d09ff4183dc49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 09:10:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 853 B/s rd, 341 B/s wr, 1 op/s
Jan 27 09:10:50 compute-0 podman[274486]: 2026-01-27 09:10:50.762856933 +0000 UTC m=+0.127739765 container start d952326c65df867bd078b6d7d0be03eda33b025528b3a73167d09ff4183dc49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:10:50 compute-0 podman[274486]: 2026-01-27 09:10:50.765834394 +0000 UTC m=+0.130717226 container attach d952326c65df867bd078b6d7d0be03eda33b025528b3a73167d09ff4183dc49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:10:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:51 compute-0 sudo[274508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:51 compute-0 sudo[274508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:51 compute-0 sudo[274508]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:51 compute-0 sudo[274533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:51 compute-0 sudo[274533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:51 compute-0 sudo[274533]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:51 compute-0 pensive_goodall[274503]: {
Jan 27 09:10:51 compute-0 pensive_goodall[274503]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:10:51 compute-0 pensive_goodall[274503]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:10:51 compute-0 pensive_goodall[274503]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:10:51 compute-0 pensive_goodall[274503]:         "osd_id": 0,
Jan 27 09:10:51 compute-0 pensive_goodall[274503]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:10:51 compute-0 pensive_goodall[274503]:         "type": "bluestore"
Jan 27 09:10:51 compute-0 pensive_goodall[274503]:     }
Jan 27 09:10:51 compute-0 pensive_goodall[274503]: }
Jan 27 09:10:51 compute-0 systemd[1]: libpod-d952326c65df867bd078b6d7d0be03eda33b025528b3a73167d09ff4183dc49a.scope: Deactivated successfully.
Jan 27 09:10:51 compute-0 podman[274486]: 2026-01-27 09:10:51.657471339 +0000 UTC m=+1.022354191 container died d952326c65df867bd078b6d7d0be03eda33b025528b3a73167d09ff4183dc49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goodall, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 27 09:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-04107ac98fb0402317439224f03a700cf19af882e64a6431132106786ef2d976-merged.mount: Deactivated successfully.
Jan 27 09:10:51 compute-0 podman[274486]: 2026-01-27 09:10:51.709155293 +0000 UTC m=+1.074038125 container remove d952326c65df867bd078b6d7d0be03eda33b025528b3a73167d09ff4183dc49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 09:10:51 compute-0 systemd[1]: libpod-conmon-d952326c65df867bd078b6d7d0be03eda33b025528b3a73167d09ff4183dc49a.scope: Deactivated successfully.
Jan 27 09:10:51 compute-0 sudo[274381]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:10:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:10:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:51 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b42cc83a-eea2-4b7b-b39d-3015e4614b26 does not exist
Jan 27 09:10:51 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 66596e4e-4003-4aad-8702-ff433f6a0d12 does not exist
Jan 27 09:10:51 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5090b21d-937a-4348-8471-dbd95ba47ce5 does not exist
Jan 27 09:10:51 compute-0 sudo[274586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:10:51 compute-0 sudo[274586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:51 compute-0 sudo[274586]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:51 compute-0 sudo[274611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:10:51 compute-0 sudo[274611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:10:51 compute-0 sudo[274611]: pam_unix(sudo:session): session closed for user root
Jan 27 09:10:51 compute-0 ceph-mon[74357]: pgmap v1433: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 853 B/s rd, 341 B/s wr, 1 op/s
Jan 27 09:10:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:10:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:52.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:52.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Jan 27 09:10:53 compute-0 ceph-mon[74357]: pgmap v1434: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Jan 27 09:10:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:10:54.251 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:10:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:10:54.252 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:10:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:10:54.252 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:10:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:54.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:54.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Jan 27 09:10:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:55.964051) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505055964134, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1170, "num_deletes": 506, "total_data_size": 1455730, "memory_usage": 1486536, "flush_reason": "Manual Compaction"}
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505055970509, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 940054, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31103, "largest_seqno": 32272, "table_properties": {"data_size": 935520, "index_size": 1610, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14269, "raw_average_key_size": 19, "raw_value_size": 924025, "raw_average_value_size": 1253, "num_data_blocks": 70, "num_entries": 737, "num_filter_entries": 737, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769504981, "oldest_key_time": 1769504981, "file_creation_time": 1769505055, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 6488 microseconds, and 3255 cpu microseconds.
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:55.970551) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 940054 bytes OK
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:55.970570) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:55.973432) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:55.973445) EVENT_LOG_v1 {"time_micros": 1769505055973440, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:55.973462) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1449350, prev total WAL file size 1449350, number of live WAL files 2.
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:55.974198) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(918KB)], [68(10MB)]
Jan 27 09:10:55 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505055974275, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11893498, "oldest_snapshot_seqno": -1}
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5421 keys, 8425115 bytes, temperature: kUnknown
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505056033818, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8425115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8390008, "index_size": 20469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 139506, "raw_average_key_size": 25, "raw_value_size": 8293027, "raw_average_value_size": 1529, "num_data_blocks": 829, "num_entries": 5421, "num_filter_entries": 5421, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769505055, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:56.034141) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8425115 bytes
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:56.035294) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.3 rd, 141.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(21.6) write-amplify(9.0) OK, records in: 6418, records dropped: 997 output_compression: NoCompression
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:56.035346) EVENT_LOG_v1 {"time_micros": 1769505056035329, "job": 38, "event": "compaction_finished", "compaction_time_micros": 59668, "compaction_time_cpu_micros": 19382, "output_level": 6, "num_output_files": 1, "total_output_size": 8425115, "num_input_records": 6418, "num_output_records": 5421, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505056035716, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505056037853, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:55.974044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:56.037916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:56.037921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:56.037923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:56.037925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:10:56 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:10:56.037927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:10:56 compute-0 ceph-mon[74357]: pgmap v1435: 305 pgs: 305 active+clean; 41 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Jan 27 09:10:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 09:10:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:56.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 09:10:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:56.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 42 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 27 09:10:57 compute-0 podman[274639]: 2026-01-27 09:10:57.27803804 +0000 UTC m=+0.091943287 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 27 09:10:58 compute-0 ceph-mon[74357]: pgmap v1436: 305 pgs: 305 active+clean; 42 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 27 09:10:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:10:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:10:58.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:10:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:10:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:10:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:10:58.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:10:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 42 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 27 09:11:00 compute-0 ceph-mon[74357]: pgmap v1437: 305 pgs: 305 active+clean; 42 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 27 09:11:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3574185752' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:11:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3574185752' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:11:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:00.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:00.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 42 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 22 KiB/s wr, 22 op/s
Jan 27 09:11:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:02 compute-0 ceph-mon[74357]: pgmap v1438: 305 pgs: 305 active+clean; 42 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 22 KiB/s wr, 22 op/s
Jan 27 09:11:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:02.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:02.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 42 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 21 KiB/s wr, 31 op/s
Jan 27 09:11:04 compute-0 ceph-mon[74357]: pgmap v1439: 305 pgs: 305 active+clean; 42 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 21 KiB/s wr, 31 op/s
Jan 27 09:11:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:04.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:04.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 42 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 21 KiB/s wr, 30 op/s
Jan 27 09:11:05 compute-0 nova_compute[247671]: 2026-01-27 09:11:05.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:11:05 compute-0 nova_compute[247671]: 2026-01-27 09:11:05.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:11:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:06 compute-0 ceph-mon[74357]: pgmap v1440: 305 pgs: 305 active+clean; 42 MiB data, 264 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 21 KiB/s wr, 30 op/s
Jan 27 09:11:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:06.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:06.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 42 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 109 KiB/s rd, 21 KiB/s wr, 181 op/s
Jan 27 09:11:07 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2605242198' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:11:07 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2605242198' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:11:07 compute-0 nova_compute[247671]: 2026-01-27 09:11:07.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:11:08 compute-0 ceph-mon[74357]: pgmap v1441: 305 pgs: 305 active+clean; 42 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 109 KiB/s rd, 21 KiB/s wr, 181 op/s
Jan 27 09:11:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:08.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 42 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 27 09:11:09 compute-0 nova_compute[247671]: 2026-01-27 09:11:09.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:11:10 compute-0 ceph-mon[74357]: pgmap v1442: 305 pgs: 305 active+clean; 42 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 27 09:11:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:10.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:10 compute-0 nova_compute[247671]: 2026-01-27 09:11:10.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:11:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:10.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 118 KiB/s rd, 341 B/s wr, 192 op/s
Jan 27 09:11:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:11 compute-0 podman[274674]: 2026-01-27 09:11:11.262805638 +0000 UTC m=+0.079051983 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:11:11 compute-0 sudo[274695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:11 compute-0 sudo[274695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:11 compute-0 sudo[274695]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:11 compute-0 sudo[274720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:11 compute-0 sudo[274720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:11 compute-0 sudo[274720]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:12 compute-0 ceph-mon[74357]: pgmap v1443: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 118 KiB/s rd, 341 B/s wr, 192 op/s
Jan 27 09:11:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:12.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:12.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 108 KiB/s rd, 597 B/s wr, 176 op/s
Jan 27 09:11:13 compute-0 nova_compute[247671]: 2026-01-27 09:11:13.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:11:14 compute-0 ceph-mon[74357]: pgmap v1444: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 108 KiB/s rd, 597 B/s wr, 176 op/s
Jan 27 09:11:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:14.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:14 compute-0 nova_compute[247671]: 2026-01-27 09:11:14.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:11:14 compute-0 nova_compute[247671]: 2026-01-27 09:11:14.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:11:14 compute-0 nova_compute[247671]: 2026-01-27 09:11:14.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:11:14 compute-0 nova_compute[247671]: 2026-01-27 09:11:14.454 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:11:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:14.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 102 KiB/s rd, 597 B/s wr, 165 op/s
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:11:15
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'backups', 'default.rgw.log', 'volumes', 'vms']
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:11:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:11:15 compute-0 nova_compute[247671]: 2026-01-27 09:11:15.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:11:15 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:16 compute-0 ceph-mon[74357]: pgmap v1445: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 102 KiB/s rd, 597 B/s wr, 165 op/s
Jan 27 09:11:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:16.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:16.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 102 KiB/s rd, 597 B/s wr, 165 op/s
Jan 27 09:11:17 compute-0 ceph-mon[74357]: pgmap v1446: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 102 KiB/s rd, 597 B/s wr, 165 op/s
Jan 27 09:11:17 compute-0 nova_compute[247671]: 2026-01-27 09:11:17.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:11:17 compute-0 nova_compute[247671]: 2026-01-27 09:11:17.467 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:11:17 compute-0 nova_compute[247671]: 2026-01-27 09:11:17.467 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:11:17 compute-0 nova_compute[247671]: 2026-01-27 09:11:17.467 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:11:17 compute-0 nova_compute[247671]: 2026-01-27 09:11:17.467 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:11:17 compute-0 nova_compute[247671]: 2026-01-27 09:11:17.468 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:11:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:11:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/379669987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:11:17 compute-0 nova_compute[247671]: 2026-01-27 09:11:17.921 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.066 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.067 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.068 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.068 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:11:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/379669987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.351 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.351 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.352 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:11:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:18.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.465 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:11:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:18.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 27 09:11:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:11:18 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1680816070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.887 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.896 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.922 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.924 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:11:18 compute-0 nova_compute[247671]: 2026-01-27 09:11:18.924 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:11:19 compute-0 ceph-mon[74357]: pgmap v1447: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 27 09:11:19 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3743172532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:11:19 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1680816070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:11:19 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/4256550250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:11:20 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1261400032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:11:20 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3660851559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:11:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:20.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 27 09:11:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:21 compute-0 ceph-mon[74357]: pgmap v1448: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 27 09:11:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:22.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:22.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 27 09:11:22 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:11:22.921 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:11:22 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:11:22.922 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:11:23 compute-0 ceph-mon[74357]: pgmap v1449: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 27 09:11:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:24.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 27 09:11:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:11:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:24 compute-0 nova_compute[247671]: 2026-01-27 09:11:24.925 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:11:24 compute-0 nova_compute[247671]: 2026-01-27 09:11:24.926 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:11:25 compute-0 ceph-mon[74357]: pgmap v1450: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:26.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:27 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:11:27.923 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:11:27 compute-0 ceph-mon[74357]: pgmap v1451: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:28 compute-0 podman[274797]: 2026-01-27 09:11:28.266558394 +0000 UTC m=+0.076280218 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 09:11:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:28.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:28.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:30 compute-0 ceph-mon[74357]: pgmap v1452: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:30.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:30.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:31 compute-0 sudo[274826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:31 compute-0 sudo[274826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:31 compute-0 sudo[274826]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:31 compute-0 sudo[274851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:31 compute-0 sudo[274851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:31 compute-0 sudo[274851]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:32 compute-0 ceph-mon[74357]: pgmap v1453: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:32.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:32.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:34 compute-0 ceph-mon[74357]: pgmap v1454: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:34.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:34.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:36 compute-0 ceph-mon[74357]: pgmap v1455: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:36.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:36.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:38 compute-0 ceph-mon[74357]: pgmap v1456: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:38.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:38.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:40 compute-0 ceph-mon[74357]: pgmap v1457: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:40.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:40.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:40 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:42 compute-0 ceph-mon[74357]: pgmap v1458: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:42 compute-0 podman[274881]: 2026-01-27 09:11:42.230574274 +0000 UTC m=+0.047502681 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 09:11:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:42.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:42.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:44 compute-0 ceph-mon[74357]: pgmap v1459: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:44.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:44.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:11:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:11:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:11:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:11:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:11:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:11:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:46 compute-0 ceph-mon[74357]: pgmap v1460: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:46.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:46.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:48 compute-0 ceph-mon[74357]: pgmap v1461: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 09:11:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:48.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 09:11:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:48.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:50 compute-0 ceph-mon[74357]: pgmap v1462: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:11:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:50.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:11:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:50.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:51 compute-0 sudo[274906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:51 compute-0 sudo[274906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:51 compute-0 sudo[274906]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:51 compute-0 sudo[274931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:51 compute-0 sudo[274931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:51 compute-0 sudo[274931]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:52 compute-0 ceph-mon[74357]: pgmap v1463: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:52 compute-0 sudo[274956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:52 compute-0 sudo[274956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:52 compute-0 sudo[274956]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:52 compute-0 sudo[274981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:11:52 compute-0 sudo[274981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:52 compute-0 sudo[274981]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:52 compute-0 sudo[275006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:52 compute-0 sudo[275006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:52 compute-0 sudo[275006]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:52 compute-0 sudo[275031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:11:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:52.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:52 compute-0 sudo[275031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:52.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:52 compute-0 sudo[275031]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 27 09:11:52 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 27 09:11:53 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 27 09:11:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:11:53 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:11:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:11:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:11:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:11:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:11:53 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 71b5d0e1-55e9-445b-a367-b3defaddd2f3 does not exist
Jan 27 09:11:53 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c2af833e-9ede-4fd2-b044-901f8fb7ef13 does not exist
Jan 27 09:11:53 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d72fcc4d-9727-4eae-a464-028703c58c7f does not exist
Jan 27 09:11:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:11:53 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:11:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:11:53 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:11:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:11:53 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:11:53 compute-0 sudo[275088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:53 compute-0 sudo[275088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:53 compute-0 sudo[275088]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:53 compute-0 sudo[275113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:11:53 compute-0 sudo[275113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:53 compute-0 sudo[275113]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:53 compute-0 sudo[275138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:53 compute-0 sudo[275138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:53 compute-0 sudo[275138]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:53 compute-0 sudo[275163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:11:53 compute-0 sudo[275163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:54 compute-0 podman[275230]: 2026-01-27 09:11:54.046531176 +0000 UTC m=+0.044973123 container create ff7d1bd003356a3baa8dbd6b1fc0786c6c18efa334c3b0a82ec856c1f7a90f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 09:11:54 compute-0 systemd[1]: Started libpod-conmon-ff7d1bd003356a3baa8dbd6b1fc0786c6c18efa334c3b0a82ec856c1f7a90f93.scope.
Jan 27 09:11:54 compute-0 podman[275230]: 2026-01-27 09:11:54.027298429 +0000 UTC m=+0.025740396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:11:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:11:54 compute-0 podman[275230]: 2026-01-27 09:11:54.148835604 +0000 UTC m=+0.147277571 container init ff7d1bd003356a3baa8dbd6b1fc0786c6c18efa334c3b0a82ec856c1f7a90f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_meninsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 09:11:54 compute-0 podman[275230]: 2026-01-27 09:11:54.160621376 +0000 UTC m=+0.159063323 container start ff7d1bd003356a3baa8dbd6b1fc0786c6c18efa334c3b0a82ec856c1f7a90f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_meninsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 09:11:54 compute-0 xenodochial_meninsky[275247]: 167 167
Jan 27 09:11:54 compute-0 systemd[1]: libpod-ff7d1bd003356a3baa8dbd6b1fc0786c6c18efa334c3b0a82ec856c1f7a90f93.scope: Deactivated successfully.
Jan 27 09:11:54 compute-0 podman[275230]: 2026-01-27 09:11:54.169505139 +0000 UTC m=+0.167947086 container attach ff7d1bd003356a3baa8dbd6b1fc0786c6c18efa334c3b0a82ec856c1f7a90f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_meninsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 09:11:54 compute-0 podman[275230]: 2026-01-27 09:11:54.170061045 +0000 UTC m=+0.168502992 container died ff7d1bd003356a3baa8dbd6b1fc0786c6c18efa334c3b0a82ec856c1f7a90f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 27 09:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-236d90e54c5c6d88eb30265a443a58b4b336fa340584feaafba55eda7d72c615-merged.mount: Deactivated successfully.
Jan 27 09:11:54 compute-0 podman[275230]: 2026-01-27 09:11:54.249649882 +0000 UTC m=+0.248091829 container remove ff7d1bd003356a3baa8dbd6b1fc0786c6c18efa334c3b0a82ec856c1f7a90f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_meninsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:11:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:11:54.252 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:11:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:11:54.255 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:11:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:11:54.255 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:11:54 compute-0 systemd[1]: libpod-conmon-ff7d1bd003356a3baa8dbd6b1fc0786c6c18efa334c3b0a82ec856c1f7a90f93.scope: Deactivated successfully.
Jan 27 09:11:54 compute-0 ceph-mon[74357]: pgmap v1464: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:11:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:11:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:11:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:11:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:11:54 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:11:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:54.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:54 compute-0 podman[275271]: 2026-01-27 09:11:54.450456497 +0000 UTC m=+0.053473624 container create bd603209a43dc1d77612b134be12a318f4facced83bf2e9f11db6172e6ff3dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:11:54 compute-0 systemd[1]: Started libpod-conmon-bd603209a43dc1d77612b134be12a318f4facced83bf2e9f11db6172e6ff3dd7.scope.
Jan 27 09:11:54 compute-0 podman[275271]: 2026-01-27 09:11:54.422528243 +0000 UTC m=+0.025545400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:11:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5f4d38ab3eebb7a542b01d99617d05dfe82d0d79de7a1923314fba22925c59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5f4d38ab3eebb7a542b01d99617d05dfe82d0d79de7a1923314fba22925c59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5f4d38ab3eebb7a542b01d99617d05dfe82d0d79de7a1923314fba22925c59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5f4d38ab3eebb7a542b01d99617d05dfe82d0d79de7a1923314fba22925c59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5f4d38ab3eebb7a542b01d99617d05dfe82d0d79de7a1923314fba22925c59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:54 compute-0 podman[275271]: 2026-01-27 09:11:54.546193666 +0000 UTC m=+0.149210833 container init bd603209a43dc1d77612b134be12a318f4facced83bf2e9f11db6172e6ff3dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chaplygin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:11:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:54.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:54 compute-0 podman[275271]: 2026-01-27 09:11:54.554073441 +0000 UTC m=+0.157090568 container start bd603209a43dc1d77612b134be12a318f4facced83bf2e9f11db6172e6ff3dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 27 09:11:54 compute-0 podman[275271]: 2026-01-27 09:11:54.557639659 +0000 UTC m=+0.160656796 container attach bd603209a43dc1d77612b134be12a318f4facced83bf2e9f11db6172e6ff3dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chaplygin, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:11:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:55 compute-0 ceph-mon[74357]: pgmap v1465: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:55 compute-0 vibrant_chaplygin[275288]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:11:55 compute-0 vibrant_chaplygin[275288]: --> relative data size: 1.0
Jan 27 09:11:55 compute-0 vibrant_chaplygin[275288]: --> All data devices are unavailable
Jan 27 09:11:55 compute-0 systemd[1]: libpod-bd603209a43dc1d77612b134be12a318f4facced83bf2e9f11db6172e6ff3dd7.scope: Deactivated successfully.
Jan 27 09:11:55 compute-0 conmon[275288]: conmon bd603209a43dc1d77612 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd603209a43dc1d77612b134be12a318f4facced83bf2e9f11db6172e6ff3dd7.scope/container/memory.events
Jan 27 09:11:55 compute-0 podman[275271]: 2026-01-27 09:11:55.392305164 +0000 UTC m=+0.995322291 container died bd603209a43dc1d77612b134be12a318f4facced83bf2e9f11db6172e6ff3dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chaplygin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 09:11:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb5f4d38ab3eebb7a542b01d99617d05dfe82d0d79de7a1923314fba22925c59-merged.mount: Deactivated successfully.
Jan 27 09:11:55 compute-0 podman[275271]: 2026-01-27 09:11:55.450178158 +0000 UTC m=+1.053195285 container remove bd603209a43dc1d77612b134be12a318f4facced83bf2e9f11db6172e6ff3dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:11:55 compute-0 systemd[1]: libpod-conmon-bd603209a43dc1d77612b134be12a318f4facced83bf2e9f11db6172e6ff3dd7.scope: Deactivated successfully.
Jan 27 09:11:55 compute-0 sudo[275163]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:55 compute-0 sudo[275314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:55 compute-0 sudo[275314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:55 compute-0 sudo[275314]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:55 compute-0 sudo[275339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:11:55 compute-0 sudo[275339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:55 compute-0 sudo[275339]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:55 compute-0 sudo[275364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:55 compute-0 sudo[275364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:55 compute-0 sudo[275364]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:55 compute-0 sudo[275389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:11:55 compute-0 sudo[275389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:55 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:56 compute-0 podman[275454]: 2026-01-27 09:11:56.092160952 +0000 UTC m=+0.047906402 container create 1058ff22d3916ae10299644705bb64b86f21de462ecb5d64b55b5bccb7571355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swanson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 27 09:11:56 compute-0 systemd[1]: Started libpod-conmon-1058ff22d3916ae10299644705bb64b86f21de462ecb5d64b55b5bccb7571355.scope.
Jan 27 09:11:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:11:56 compute-0 podman[275454]: 2026-01-27 09:11:56.07052431 +0000 UTC m=+0.026269790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:11:56 compute-0 podman[275454]: 2026-01-27 09:11:56.17179831 +0000 UTC m=+0.127543810 container init 1058ff22d3916ae10299644705bb64b86f21de462ecb5d64b55b5bccb7571355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swanson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 09:11:56 compute-0 podman[275454]: 2026-01-27 09:11:56.18092151 +0000 UTC m=+0.136666970 container start 1058ff22d3916ae10299644705bb64b86f21de462ecb5d64b55b5bccb7571355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swanson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 09:11:56 compute-0 podman[275454]: 2026-01-27 09:11:56.185137195 +0000 UTC m=+0.140882665 container attach 1058ff22d3916ae10299644705bb64b86f21de462ecb5d64b55b5bccb7571355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:11:56 compute-0 condescending_swanson[275470]: 167 167
Jan 27 09:11:56 compute-0 systemd[1]: libpod-1058ff22d3916ae10299644705bb64b86f21de462ecb5d64b55b5bccb7571355.scope: Deactivated successfully.
Jan 27 09:11:56 compute-0 podman[275454]: 2026-01-27 09:11:56.188715763 +0000 UTC m=+0.144461203 container died 1058ff22d3916ae10299644705bb64b86f21de462ecb5d64b55b5bccb7571355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swanson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 27 09:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-66a9b1266c547a03ad7383833bf9f34564ca5b053ec9a6f639e7d143fda4d603-merged.mount: Deactivated successfully.
Jan 27 09:11:56 compute-0 podman[275454]: 2026-01-27 09:11:56.22367091 +0000 UTC m=+0.179416360 container remove 1058ff22d3916ae10299644705bb64b86f21de462ecb5d64b55b5bccb7571355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swanson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:11:56 compute-0 systemd[1]: libpod-conmon-1058ff22d3916ae10299644705bb64b86f21de462ecb5d64b55b5bccb7571355.scope: Deactivated successfully.
Jan 27 09:11:56 compute-0 podman[275494]: 2026-01-27 09:11:56.393838895 +0000 UTC m=+0.042088313 container create 3a1230ee43a2fd42f8af9cb5a5ecde4d4d7b3739e8e37713729670d4c117f9a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_diffie, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:11:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:56.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:56 compute-0 systemd[1]: Started libpod-conmon-3a1230ee43a2fd42f8af9cb5a5ecde4d4d7b3739e8e37713729670d4c117f9a2.scope.
Jan 27 09:11:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:11:56 compute-0 podman[275494]: 2026-01-27 09:11:56.376765808 +0000 UTC m=+0.025015256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:11:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f227ed56b502d560a5595e57fa008efae99bf96003125ca408c381bb168fc8e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f227ed56b502d560a5595e57fa008efae99bf96003125ca408c381bb168fc8e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f227ed56b502d560a5595e57fa008efae99bf96003125ca408c381bb168fc8e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f227ed56b502d560a5595e57fa008efae99bf96003125ca408c381bb168fc8e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:56 compute-0 podman[275494]: 2026-01-27 09:11:56.48831376 +0000 UTC m=+0.136563208 container init 3a1230ee43a2fd42f8af9cb5a5ecde4d4d7b3739e8e37713729670d4c117f9a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_diffie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:11:56 compute-0 podman[275494]: 2026-01-27 09:11:56.494282163 +0000 UTC m=+0.142531581 container start 3a1230ee43a2fd42f8af9cb5a5ecde4d4d7b3739e8e37713729670d4c117f9a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_diffie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 09:11:56 compute-0 podman[275494]: 2026-01-27 09:11:56.498091788 +0000 UTC m=+0.146341226 container attach 3a1230ee43a2fd42f8af9cb5a5ecde4d4d7b3739e8e37713729670d4c117f9a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_diffie, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 09:11:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:11:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:56.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:11:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:57 compute-0 exciting_diffie[275510]: {
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:     "0": [
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:         {
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             "devices": [
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "/dev/loop3"
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             ],
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             "lv_name": "ceph_lv0",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             "lv_size": "7511998464",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             "name": "ceph_lv0",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             "tags": {
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "ceph.cluster_name": "ceph",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "ceph.crush_device_class": "",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "ceph.encrypted": "0",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "ceph.osd_id": "0",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "ceph.type": "block",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:                 "ceph.vdo": "0"
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             },
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             "type": "block",
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:             "vg_name": "ceph_vg0"
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:         }
Jan 27 09:11:57 compute-0 exciting_diffie[275510]:     ]
Jan 27 09:11:57 compute-0 exciting_diffie[275510]: }
Jan 27 09:11:57 compute-0 systemd[1]: libpod-3a1230ee43a2fd42f8af9cb5a5ecde4d4d7b3739e8e37713729670d4c117f9a2.scope: Deactivated successfully.
Jan 27 09:11:57 compute-0 podman[275494]: 2026-01-27 09:11:57.295033821 +0000 UTC m=+0.943283239 container died 3a1230ee43a2fd42f8af9cb5a5ecde4d4d7b3739e8e37713729670d4c117f9a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_diffie, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:11:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f227ed56b502d560a5595e57fa008efae99bf96003125ca408c381bb168fc8e3-merged.mount: Deactivated successfully.
Jan 27 09:11:57 compute-0 podman[275494]: 2026-01-27 09:11:57.404428464 +0000 UTC m=+1.052677922 container remove 3a1230ee43a2fd42f8af9cb5a5ecde4d4d7b3739e8e37713729670d4c117f9a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:11:57 compute-0 systemd[1]: libpod-conmon-3a1230ee43a2fd42f8af9cb5a5ecde4d4d7b3739e8e37713729670d4c117f9a2.scope: Deactivated successfully.
Jan 27 09:11:57 compute-0 sudo[275389]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:57 compute-0 sudo[275531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:57 compute-0 sudo[275531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:57 compute-0 sudo[275531]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:57 compute-0 sudo[275556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:11:57 compute-0 sudo[275556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:57 compute-0 sudo[275556]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:57 compute-0 sudo[275581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:57 compute-0 sudo[275581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:57 compute-0 sudo[275581]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:57 compute-0 sudo[275606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:11:57 compute-0 sudo[275606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:57 compute-0 ceph-mon[74357]: pgmap v1466: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:58 compute-0 podman[275670]: 2026-01-27 09:11:58.127350192 +0000 UTC m=+0.068819593 container create 169d1fe8ac86e4900f7d57759dbd44c6254266d9b154ebfeb536f1b5330e1238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:11:58 compute-0 systemd[1]: Started libpod-conmon-169d1fe8ac86e4900f7d57759dbd44c6254266d9b154ebfeb536f1b5330e1238.scope.
Jan 27 09:11:58 compute-0 podman[275670]: 2026-01-27 09:11:58.083816172 +0000 UTC m=+0.025285593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:11:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:11:58 compute-0 podman[275670]: 2026-01-27 09:11:58.217356614 +0000 UTC m=+0.158826025 container init 169d1fe8ac86e4900f7d57759dbd44c6254266d9b154ebfeb536f1b5330e1238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 09:11:58 compute-0 podman[275670]: 2026-01-27 09:11:58.226628319 +0000 UTC m=+0.168097720 container start 169d1fe8ac86e4900f7d57759dbd44c6254266d9b154ebfeb536f1b5330e1238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:11:58 compute-0 fervent_pike[275687]: 167 167
Jan 27 09:11:58 compute-0 systemd[1]: libpod-169d1fe8ac86e4900f7d57759dbd44c6254266d9b154ebfeb536f1b5330e1238.scope: Deactivated successfully.
Jan 27 09:11:58 compute-0 podman[275670]: 2026-01-27 09:11:58.296074838 +0000 UTC m=+0.237544239 container attach 169d1fe8ac86e4900f7d57759dbd44c6254266d9b154ebfeb536f1b5330e1238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pike, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:11:58 compute-0 podman[275670]: 2026-01-27 09:11:58.297433586 +0000 UTC m=+0.238902987 container died 169d1fe8ac86e4900f7d57759dbd44c6254266d9b154ebfeb536f1b5330e1238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pike, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:11:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-93c0ca622c0e204d6201da2559e819c218f1a66b902bcb3f6080cf2f24267c3e-merged.mount: Deactivated successfully.
Jan 27 09:11:58 compute-0 podman[275670]: 2026-01-27 09:11:58.369129608 +0000 UTC m=+0.310599009 container remove 169d1fe8ac86e4900f7d57759dbd44c6254266d9b154ebfeb536f1b5330e1238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pike, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:11:58 compute-0 systemd[1]: libpod-conmon-169d1fe8ac86e4900f7d57759dbd44c6254266d9b154ebfeb536f1b5330e1238.scope: Deactivated successfully.
Jan 27 09:11:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:11:58.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:58 compute-0 podman[275706]: 2026-01-27 09:11:58.522940375 +0000 UTC m=+0.136694150 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:11:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:11:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:11:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:11:58.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:11:58 compute-0 podman[275740]: 2026-01-27 09:11:58.61376956 +0000 UTC m=+0.061039531 container create fc4589f7e6dbde1990db2d3bfd1d64b5cceb0790c247739fb5047f48776efae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hodgkin, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 09:11:58 compute-0 systemd[1]: Started libpod-conmon-fc4589f7e6dbde1990db2d3bfd1d64b5cceb0790c247739fb5047f48776efae1.scope.
Jan 27 09:11:58 compute-0 podman[275740]: 2026-01-27 09:11:58.584701205 +0000 UTC m=+0.031971196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:11:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb70596d03134df8f336589b26f5f2d346079988b8c4a7905dde6e560293733/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb70596d03134df8f336589b26f5f2d346079988b8c4a7905dde6e560293733/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb70596d03134df8f336589b26f5f2d346079988b8c4a7905dde6e560293733/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb70596d03134df8f336589b26f5f2d346079988b8c4a7905dde6e560293733/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:58 compute-0 podman[275740]: 2026-01-27 09:11:58.713413466 +0000 UTC m=+0.160683457 container init fc4589f7e6dbde1990db2d3bfd1d64b5cceb0790c247739fb5047f48776efae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 27 09:11:58 compute-0 podman[275740]: 2026-01-27 09:11:58.726020521 +0000 UTC m=+0.173290502 container start fc4589f7e6dbde1990db2d3bfd1d64b5cceb0790c247739fb5047f48776efae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hodgkin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 09:11:58 compute-0 podman[275740]: 2026-01-27 09:11:58.734702439 +0000 UTC m=+0.181972420 container attach fc4589f7e6dbde1990db2d3bfd1d64b5cceb0790c247739fb5047f48776efae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:11:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:59 compute-0 priceless_hodgkin[275756]: {
Jan 27 09:11:59 compute-0 priceless_hodgkin[275756]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:11:59 compute-0 priceless_hodgkin[275756]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:11:59 compute-0 priceless_hodgkin[275756]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:11:59 compute-0 priceless_hodgkin[275756]:         "osd_id": 0,
Jan 27 09:11:59 compute-0 priceless_hodgkin[275756]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:11:59 compute-0 priceless_hodgkin[275756]:         "type": "bluestore"
Jan 27 09:11:59 compute-0 priceless_hodgkin[275756]:     }
Jan 27 09:11:59 compute-0 priceless_hodgkin[275756]: }
Jan 27 09:11:59 compute-0 systemd[1]: libpod-fc4589f7e6dbde1990db2d3bfd1d64b5cceb0790c247739fb5047f48776efae1.scope: Deactivated successfully.
Jan 27 09:11:59 compute-0 podman[275740]: 2026-01-27 09:11:59.685859442 +0000 UTC m=+1.133129413 container died fc4589f7e6dbde1990db2d3bfd1d64b5cceb0790c247739fb5047f48776efae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hodgkin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:11:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-eeb70596d03134df8f336589b26f5f2d346079988b8c4a7905dde6e560293733-merged.mount: Deactivated successfully.
Jan 27 09:11:59 compute-0 podman[275740]: 2026-01-27 09:11:59.788588372 +0000 UTC m=+1.235858363 container remove fc4589f7e6dbde1990db2d3bfd1d64b5cceb0790c247739fb5047f48776efae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hodgkin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:11:59 compute-0 systemd[1]: libpod-conmon-fc4589f7e6dbde1990db2d3bfd1d64b5cceb0790c247739fb5047f48776efae1.scope: Deactivated successfully.
Jan 27 09:11:59 compute-0 sudo[275606]: pam_unix(sudo:session): session closed for user root
Jan 27 09:11:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:11:59 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:11:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:11:59 compute-0 ceph-mon[74357]: pgmap v1467: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:11:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2397335352' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:11:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2397335352' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:11:59 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:11:59 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d40e5e73-1850-4ac9-900f-c9f36d4d8d8d does not exist
Jan 27 09:11:59 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c16dddea-d6bb-4163-9115-f3becc4459b1 does not exist
Jan 27 09:11:59 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev cf39c53a-96ad-40af-9d4b-64c07fc4022a does not exist
Jan 27 09:11:59 compute-0 sudo[275791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:11:59 compute-0 sudo[275791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:11:59 compute-0 sudo[275791]: pam_unix(sudo:session): session closed for user root
Jan 27 09:12:00 compute-0 sudo[275816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:12:00 compute-0 sudo[275816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:12:00 compute-0 sudo[275816]: pam_unix(sudo:session): session closed for user root
Jan 27 09:12:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:00.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:00.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:12:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:12:02 compute-0 ceph-mon[74357]: pgmap v1468: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:02.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:02.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:04 compute-0 ceph-mon[74357]: pgmap v1469: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:04.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:04.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:05 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:12:05.560 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:12:05 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:12:05.562 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:12:05 compute-0 ceph-mon[74357]: pgmap v1470: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:06.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:06.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:07 compute-0 nova_compute[247671]: 2026-01-27 09:12:07.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:12:07 compute-0 nova_compute[247671]: 2026-01-27 09:12:07.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:12:08 compute-0 ceph-mon[74357]: pgmap v1471: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:08.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 09:12:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:08.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 09:12:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:09 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:12:09.565 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:12:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:10.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:10.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:11 compute-0 nova_compute[247671]: 2026-01-27 09:12:11.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:12:11 compute-0 nova_compute[247671]: 2026-01-27 09:12:11.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:12:11 compute-0 sudo[275847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:12:11 compute-0 sudo[275847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:12:11 compute-0 sudo[275847]: pam_unix(sudo:session): session closed for user root
Jan 27 09:12:11 compute-0 sudo[275872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:12:11 compute-0 sudo[275872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:12:11 compute-0 sudo[275872]: pam_unix(sudo:session): session closed for user root
Jan 27 09:12:12 compute-0 ceph-mon[74357]: pgmap v1472: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:12:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:12.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:12:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:12.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:13 compute-0 podman[275898]: 2026-01-27 09:12:13.262446982 +0000 UTC m=+0.078197920 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:12:13 compute-0 ceph-mon[74357]: pgmap v1473: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:13 compute-0 ceph-mon[74357]: pgmap v1474: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:13 compute-0 nova_compute[247671]: 2026-01-27 09:12:13.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:12:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:14.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:14.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:12:15
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'images', 'backups', 'default.rgw.meta', '.rgw.root']
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:12:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:12:15 compute-0 ceph-mon[74357]: pgmap v1475: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:16 compute-0 nova_compute[247671]: 2026-01-27 09:12:16.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:12:16 compute-0 nova_compute[247671]: 2026-01-27 09:12:16.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:12:16 compute-0 nova_compute[247671]: 2026-01-27 09:12:16.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:12:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:16.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:16 compute-0 nova_compute[247671]: 2026-01-27 09:12:16.445 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:12:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:16.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:17 compute-0 nova_compute[247671]: 2026-01-27 09:12:17.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:12:17 compute-0 ceph-mon[74357]: pgmap v1476: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:18 compute-0 nova_compute[247671]: 2026-01-27 09:12:18.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:12:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:18.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:18 compute-0 nova_compute[247671]: 2026-01-27 09:12:18.456 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:18 compute-0 nova_compute[247671]: 2026-01-27 09:12:18.456 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:18 compute-0 nova_compute[247671]: 2026-01-27 09:12:18.457 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:18 compute-0 nova_compute[247671]: 2026-01-27 09:12:18.457 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:12:18 compute-0 nova_compute[247671]: 2026-01-27 09:12:18.457 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:12:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:18.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:12:18 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263703981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:12:18 compute-0 nova_compute[247671]: 2026-01-27 09:12:18.923 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:12:18 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4263703981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.070 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.072 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.072 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.072 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.170 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.170 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.170 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.272 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:12:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:12:19 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1944866476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.711 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.717 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.742 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.743 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:12:19 compute-0 nova_compute[247671]: 2026-01-27 09:12:19.744 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:19 compute-0 ceph-mon[74357]: pgmap v1477: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:19 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1944866476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:12:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:20.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:20 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1970892963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:12:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:22 compute-0 ceph-mon[74357]: pgmap v1478: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:22 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2727277299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:12:22 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1021794888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:12:22 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2001876598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:12:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:22.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:22.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:24 compute-0 ceph-mon[74357]: pgmap v1479: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:24.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:12:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:24.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:26 compute-0 ceph-mon[74357]: pgmap v1480: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:26.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:26.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:26 compute-0 nova_compute[247671]: 2026-01-27 09:12:26.745 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:12:26 compute-0 nova_compute[247671]: 2026-01-27 09:12:26.746 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:12:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:28 compute-0 ceph-mon[74357]: pgmap v1481: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:28.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:28.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:29 compute-0 podman[275969]: 2026-01-27 09:12:29.306741693 +0000 UTC m=+0.112495528 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 27 09:12:30 compute-0 ceph-mon[74357]: pgmap v1482: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:30.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:30.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:31 compute-0 sudo[275996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:12:31 compute-0 sudo[275996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:12:31 compute-0 sudo[275996]: pam_unix(sudo:session): session closed for user root
Jan 27 09:12:31 compute-0 sudo[276021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:12:31 compute-0 sudo[276021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:12:31 compute-0 sudo[276021]: pam_unix(sudo:session): session closed for user root
Jan 27 09:12:32 compute-0 ceph-mon[74357]: pgmap v1483: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:32.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:32.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:34.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:34.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:34 compute-0 ceph-mon[74357]: pgmap v1484: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:35 compute-0 ceph-mon[74357]: pgmap v1485: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:36.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:36.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:37 compute-0 ceph-mon[74357]: pgmap v1486: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:38.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:38.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:40.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:40.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:40 compute-0 ceph-mon[74357]: pgmap v1487: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:41 compute-0 ceph-mon[74357]: pgmap v1488: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:42.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:42.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:43 compute-0 ceph-mon[74357]: pgmap v1489: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:44 compute-0 podman[276052]: 2026-01-27 09:12:44.227842862 +0000 UTC m=+0.048307584 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 09:12:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:44.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:44.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:12:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:12:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:12:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:12:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:12:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:12:45 compute-0 ceph-mon[74357]: pgmap v1490: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:12:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:46.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:46.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 27 09:12:48 compute-0 ceph-mon[74357]: pgmap v1491: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 27 09:12:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:48.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:48.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 27 09:12:50 compute-0 ceph-mon[74357]: pgmap v1492: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 27 09:12:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:50.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:50.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 42 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 27 09:12:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:52 compute-0 sudo[276075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:12:52 compute-0 sudo[276075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:12:52 compute-0 sudo[276075]: pam_unix(sudo:session): session closed for user root
Jan 27 09:12:52 compute-0 sudo[276100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:12:52 compute-0 sudo[276100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:12:52 compute-0 ceph-mon[74357]: pgmap v1493: 305 pgs: 305 active+clean; 42 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 27 09:12:52 compute-0 sudo[276100]: pam_unix(sudo:session): session closed for user root
Jan 27 09:12:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:52.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:52.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 42 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 27 09:12:54 compute-0 ceph-mon[74357]: pgmap v1494: 305 pgs: 305 active+clean; 42 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 27 09:12:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:12:54.253 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:12:54.253 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:12:54.253 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:54.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:54.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 42 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 27 09:12:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:56 compute-0 ceph-mon[74357]: pgmap v1495: 305 pgs: 305 active+clean; 42 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 27 09:12:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:56.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:56.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 42 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 22 KiB/s wr, 5 op/s
Jan 27 09:12:58 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:12:58 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/944061257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:12:58 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:12:58 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/944061257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:12:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:12:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:12:58.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:12:58 compute-0 ceph-mon[74357]: pgmap v1496: 305 pgs: 305 active+clean; 42 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 22 KiB/s wr, 5 op/s
Jan 27 09:12:58 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/944061257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:12:58 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/944061257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:12:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:12:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:12:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:12:58.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:12:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 42 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 27 09:12:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/112858790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:12:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/112858790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:13:00 compute-0 podman[276129]: 2026-01-27 09:13:00.261996377 +0000 UTC m=+0.081424761 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 09:13:00 compute-0 sudo[276156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:00 compute-0 sudo[276156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:00 compute-0 sudo[276156]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:00 compute-0 sudo[276181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:13:00 compute-0 sudo[276181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:00 compute-0 sudo[276181]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:00.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:00 compute-0 sudo[276206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:00 compute-0 ceph-mon[74357]: pgmap v1497: 305 pgs: 305 active+clean; 42 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 27 09:13:00 compute-0 sudo[276206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:00 compute-0 sudo[276206]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:00 compute-0 sudo[276232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:13:00 compute-0 sudo[276232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:00.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 22 KiB/s wr, 18 op/s
Jan 27 09:13:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:01 compute-0 sudo[276232]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:13:01 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:13:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:13:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:13:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:13:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:13:01 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f5ac9aab-b333-49ce-883c-a6aafc8b1551 does not exist
Jan 27 09:13:01 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 9ca1ffd4-727f-40d3-9133-db0cddb1f750 does not exist
Jan 27 09:13:01 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 3d4182e8-9c57-4323-8b08-d9d2a0bbf3b7 does not exist
Jan 27 09:13:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:13:01 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:13:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:13:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:13:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:13:01 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:13:01 compute-0 sudo[276290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:01 compute-0 sudo[276290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:01 compute-0 sudo[276290]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:01 compute-0 sudo[276315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:13:01 compute-0 sudo[276315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:01 compute-0 sudo[276315]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:01 compute-0 sudo[276340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:01 compute-0 sudo[276340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:01 compute-0 sudo[276340]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:01 compute-0 sudo[276365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:13:01 compute-0 sudo[276365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:13:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:13:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:13:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:13:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:13:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:13:01 compute-0 podman[276429]: 2026-01-27 09:13:01.825375806 +0000 UTC m=+0.055331056 container create 2e971dad78fcd5b332ec8b19f57d18f21944609792e401131f3cca62bb7804db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gould, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:13:01 compute-0 systemd[1]: Started libpod-conmon-2e971dad78fcd5b332ec8b19f57d18f21944609792e401131f3cca62bb7804db.scope.
Jan 27 09:13:01 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:13:01 compute-0 podman[276429]: 2026-01-27 09:13:01.806624623 +0000 UTC m=+0.036579903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:13:01 compute-0 podman[276429]: 2026-01-27 09:13:01.916286155 +0000 UTC m=+0.146241415 container init 2e971dad78fcd5b332ec8b19f57d18f21944609792e401131f3cca62bb7804db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:13:01 compute-0 podman[276429]: 2026-01-27 09:13:01.922942847 +0000 UTC m=+0.152898097 container start 2e971dad78fcd5b332ec8b19f57d18f21944609792e401131f3cca62bb7804db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 27 09:13:01 compute-0 podman[276429]: 2026-01-27 09:13:01.926373081 +0000 UTC m=+0.156328331 container attach 2e971dad78fcd5b332ec8b19f57d18f21944609792e401131f3cca62bb7804db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gould, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:13:01 compute-0 confident_gould[276445]: 167 167
Jan 27 09:13:01 compute-0 systemd[1]: libpod-2e971dad78fcd5b332ec8b19f57d18f21944609792e401131f3cca62bb7804db.scope: Deactivated successfully.
Jan 27 09:13:01 compute-0 podman[276429]: 2026-01-27 09:13:01.929796716 +0000 UTC m=+0.159751976 container died 2e971dad78fcd5b332ec8b19f57d18f21944609792e401131f3cca62bb7804db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gould, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:13:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-9581ff86a128d0a813b43b58ecaa52354f128dbd6e6f696cf4b08775ae782b00-merged.mount: Deactivated successfully.
Jan 27 09:13:01 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:13:01.953 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:13:01 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:13:01.956 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:13:01 compute-0 podman[276429]: 2026-01-27 09:13:01.965073791 +0000 UTC m=+0.195029041 container remove 2e971dad78fcd5b332ec8b19f57d18f21944609792e401131f3cca62bb7804db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gould, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:13:01 compute-0 systemd[1]: libpod-conmon-2e971dad78fcd5b332ec8b19f57d18f21944609792e401131f3cca62bb7804db.scope: Deactivated successfully.
Jan 27 09:13:02 compute-0 podman[276469]: 2026-01-27 09:13:02.153241243 +0000 UTC m=+0.065009131 container create 29e0d3d58808807f44c90e1210b4fd078f93dbaf85594b03fe9da5320aee4eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 09:13:02 compute-0 systemd[1]: Started libpod-conmon-29e0d3d58808807f44c90e1210b4fd078f93dbaf85594b03fe9da5320aee4eed.scope.
Jan 27 09:13:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:13:02 compute-0 podman[276469]: 2026-01-27 09:13:02.121799092 +0000 UTC m=+0.033567060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddccbd27b08a58384bf3076516c23d8cc3062fa56c9cb572a3a16337d583a2a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddccbd27b08a58384bf3076516c23d8cc3062fa56c9cb572a3a16337d583a2a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddccbd27b08a58384bf3076516c23d8cc3062fa56c9cb572a3a16337d583a2a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddccbd27b08a58384bf3076516c23d8cc3062fa56c9cb572a3a16337d583a2a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddccbd27b08a58384bf3076516c23d8cc3062fa56c9cb572a3a16337d583a2a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:02 compute-0 podman[276469]: 2026-01-27 09:13:02.224718631 +0000 UTC m=+0.136486539 container init 29e0d3d58808807f44c90e1210b4fd078f93dbaf85594b03fe9da5320aee4eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 09:13:02 compute-0 podman[276469]: 2026-01-27 09:13:02.233677606 +0000 UTC m=+0.145445494 container start 29e0d3d58808807f44c90e1210b4fd078f93dbaf85594b03fe9da5320aee4eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:13:02 compute-0 podman[276469]: 2026-01-27 09:13:02.237557962 +0000 UTC m=+0.149325870 container attach 29e0d3d58808807f44c90e1210b4fd078f93dbaf85594b03fe9da5320aee4eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:13:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:02.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:02.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
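(Annotation: the radosgw triplets recurring through this window are anonymous "HEAD /" probes from 192.168.122.100 and .102 — load-balancer health checks answered with 200 in roughly a millisecond. A hedged regex sketch for pulling client, request, status, and latency out of the beast access-line format shown above; the field layout is taken verbatim from these log lines:)

    import re

    # Fields in a beast access line, as logged above:
    #   beast: <req>: <client> - <user> [<time>] "<request>" <status> <bytes> ... latency=<sec>s
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous '
            '[27/Jan/2026:09:13:02.491 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000027s')
    m = BEAST_RE.match(line)
    print(m.group("client"), m.group("request"), m.group("status"), m.group("latency"))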
Jan 27 09:13:02 compute-0 ceph-mon[74357]: pgmap v1498: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 22 KiB/s wr, 18 op/s
Jan 27 09:13:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 27 09:13:03 compute-0 boring_turing[276485]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:13:03 compute-0 boring_turing[276485]: --> relative data size: 1.0
Jan 27 09:13:03 compute-0 boring_turing[276485]: --> All data devices are unavailable
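(Annotation: the three "-->" lines above are ceph-volume's drive-group report: one LVM data device was offered, relative data size 1.0 means each OSD would take the whole device, and the device is rejected because it is already consumed. A minimal, hypothetical re-creation of that filter — the `Device` class and `available()` helper are illustrative, not ceph-volume's real internals:)

    from dataclasses import dataclass

    @dataclass
    class Device:
        path: str
        is_lvm_member: bool  # already part of a volume group, like /dev/loop3 here

    def available(devices):
        # ceph-volume treats devices that already carry LVs as unavailable
        return [d for d in devices if not d.is_lvm_member]

    passed = [Device("/dev/loop3", is_lvm_member=True)]
    print(f"--> passed data devices: 0 physical, {len(passed)} LVM")
    print("--> relative data size: 1.0")
    if not available(passed):
        print("--> All data devices are unavailable")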
Jan 27 09:13:03 compute-0 systemd[1]: libpod-29e0d3d58808807f44c90e1210b4fd078f93dbaf85594b03fe9da5320aee4eed.scope: Deactivated successfully.
Jan 27 09:13:03 compute-0 podman[276501]: 2026-01-27 09:13:03.068050083 +0000 UTC m=+0.028174432 container died 29e0d3d58808807f44c90e1210b4fd078f93dbaf85594b03fe9da5320aee4eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddccbd27b08a58384bf3076516c23d8cc3062fa56c9cb572a3a16337d583a2a4-merged.mount: Deactivated successfully.
Jan 27 09:13:03 compute-0 podman[276501]: 2026-01-27 09:13:03.114963468 +0000 UTC m=+0.075087767 container remove 29e0d3d58808807f44c90e1210b4fd078f93dbaf85594b03fe9da5320aee4eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:13:03 compute-0 systemd[1]: libpod-conmon-29e0d3d58808807f44c90e1210b4fd078f93dbaf85594b03fe9da5320aee4eed.scope: Deactivated successfully.
Jan 27 09:13:03 compute-0 sudo[276365]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:03 compute-0 sudo[276515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:03 compute-0 sudo[276515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:03 compute-0 sudo[276515]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:03 compute-0 sudo[276540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:13:03 compute-0 sudo[276540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:03 compute-0 sudo[276540]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:03 compute-0 sudo[276565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:03 compute-0 sudo[276565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:03 compute-0 sudo[276565]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:03 compute-0 sudo[276590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:13:03 compute-0 sudo[276590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:03 compute-0 podman[276655]: 2026-01-27 09:13:03.761528451 +0000 UTC m=+0.047152091 container create f63e22f1512a4d15a6a78cf77fa42eb26bbe99e0546635bc04a8ad099edfb402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chatterjee, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 09:13:03 compute-0 podman[276655]: 2026-01-27 09:13:03.733951647 +0000 UTC m=+0.019575307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:13:03 compute-0 systemd[1]: Started libpod-conmon-f63e22f1512a4d15a6a78cf77fa42eb26bbe99e0546635bc04a8ad099edfb402.scope.
Jan 27 09:13:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:13:03 compute-0 podman[276655]: 2026-01-27 09:13:03.947448713 +0000 UTC m=+0.233072373 container init f63e22f1512a4d15a6a78cf77fa42eb26bbe99e0546635bc04a8ad099edfb402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 09:13:03 compute-0 podman[276655]: 2026-01-27 09:13:03.956087429 +0000 UTC m=+0.241711069 container start f63e22f1512a4d15a6a78cf77fa42eb26bbe99e0546635bc04a8ad099edfb402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chatterjee, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 09:13:03 compute-0 flamboyant_chatterjee[276672]: 167 167
Jan 27 09:13:03 compute-0 systemd[1]: libpod-f63e22f1512a4d15a6a78cf77fa42eb26bbe99e0546635bc04a8ad099edfb402.scope: Deactivated successfully.
Jan 27 09:13:04 compute-0 ceph-mon[74357]: pgmap v1499: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 27 09:13:04 compute-0 podman[276655]: 2026-01-27 09:13:04.166354187 +0000 UTC m=+0.451977827 container attach f63e22f1512a4d15a6a78cf77fa42eb26bbe99e0546635bc04a8ad099edfb402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 27 09:13:04 compute-0 podman[276655]: 2026-01-27 09:13:04.167713144 +0000 UTC m=+0.453336784 container died f63e22f1512a4d15a6a78cf77fa42eb26bbe99e0546635bc04a8ad099edfb402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chatterjee, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:13:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-751d5550817faffa41133d83bc88d3ac9bc5ea40ddd697d0efffa3a933827b88-merged.mount: Deactivated successfully.
Jan 27 09:13:04 compute-0 podman[276655]: 2026-01-27 09:13:04.436054662 +0000 UTC m=+0.721678302 container remove f63e22f1512a4d15a6a78cf77fa42eb26bbe99e0546635bc04a8ad099edfb402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chatterjee, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 09:13:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:04.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:04 compute-0 systemd[1]: libpod-conmon-f63e22f1512a4d15a6a78cf77fa42eb26bbe99e0546635bc04a8ad099edfb402.scope: Deactivated successfully.
Jan 27 09:13:04 compute-0 podman[276699]: 2026-01-27 09:13:04.645528638 +0000 UTC m=+0.096738730 container create f5a3f1910f3761d4badf12fc843e67cb0cdb4d278be0799d8956dfef48d2c27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 27 09:13:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:04.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:04 compute-0 podman[276699]: 2026-01-27 09:13:04.570687189 +0000 UTC m=+0.021897301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:13:04 compute-0 systemd[1]: Started libpod-conmon-f5a3f1910f3761d4badf12fc843e67cb0cdb4d278be0799d8956dfef48d2c27a.scope.
Jan 27 09:13:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8011cc8246691614516fdb413fed181b65324c65aeb930c66be0b0fac2f2694b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8011cc8246691614516fdb413fed181b65324c65aeb930c66be0b0fac2f2694b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8011cc8246691614516fdb413fed181b65324c65aeb930c66be0b0fac2f2694b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8011cc8246691614516fdb413fed181b65324c65aeb930c66be0b0fac2f2694b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:04 compute-0 podman[276699]: 2026-01-27 09:13:04.760510806 +0000 UTC m=+0.211720918 container init f5a3f1910f3761d4badf12fc843e67cb0cdb4d278be0799d8956dfef48d2c27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 09:13:04 compute-0 podman[276699]: 2026-01-27 09:13:04.766228133 +0000 UTC m=+0.217438235 container start f5a3f1910f3761d4badf12fc843e67cb0cdb4d278be0799d8956dfef48d2c27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:13:04 compute-0 podman[276699]: 2026-01-27 09:13:04.769665057 +0000 UTC m=+0.220875159 container attach f5a3f1910f3761d4badf12fc843e67cb0cdb4d278be0799d8956dfef48d2c27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:13:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]: {
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:     "0": [
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:         {
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             "devices": [
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "/dev/loop3"
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             ],
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             "lv_name": "ceph_lv0",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             "lv_size": "7511998464",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             "name": "ceph_lv0",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             "tags": {
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "ceph.cluster_name": "ceph",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "ceph.crush_device_class": "",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "ceph.encrypted": "0",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "ceph.osd_id": "0",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "ceph.type": "block",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:                 "ceph.vdo": "0"
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             },
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             "type": "block",
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:             "vg_name": "ceph_vg0"
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:         }
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]:     ]
Jan 27 09:13:05 compute-0 infallible_chebyshev[276716]: }
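(Annotation: the JSON block above is the verbatim stdout of the `ceph-volume ... lvm list --format json` call issued at 09:13:03 — top-level keys are OSD ids, each mapping to a list of LV records whose tags carry the cluster and OSD fsids. A short consumption sketch, assuming the payload has been saved to a file named lvm_list.json:)

    import json

    # Parse the `ceph-volume lvm list --format json` output shown above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in lvm.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")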
Jan 27 09:13:05 compute-0 systemd[1]: libpod-f5a3f1910f3761d4badf12fc843e67cb0cdb4d278be0799d8956dfef48d2c27a.scope: Deactivated successfully.
Jan 27 09:13:05 compute-0 podman[276725]: 2026-01-27 09:13:05.584000705 +0000 UTC m=+0.024778639 container died f5a3f1910f3761d4badf12fc843e67cb0cdb4d278be0799d8956dfef48d2c27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8011cc8246691614516fdb413fed181b65324c65aeb930c66be0b0fac2f2694b-merged.mount: Deactivated successfully.
Jan 27 09:13:05 compute-0 podman[276725]: 2026-01-27 09:13:05.632950515 +0000 UTC m=+0.073728419 container remove f5a3f1910f3761d4badf12fc843e67cb0cdb4d278be0799d8956dfef48d2c27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:13:05 compute-0 systemd[1]: libpod-conmon-f5a3f1910f3761d4badf12fc843e67cb0cdb4d278be0799d8956dfef48d2c27a.scope: Deactivated successfully.
Jan 27 09:13:05 compute-0 sudo[276590]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:05 compute-0 sudo[276740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:05 compute-0 sudo[276740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:05 compute-0 sudo[276740]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:05 compute-0 sudo[276765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:13:05 compute-0 sudo[276765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:05 compute-0 sudo[276765]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:05 compute-0 sudo[276790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:05 compute-0 sudo[276790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:05 compute-0 sudo[276790]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:05 compute-0 sudo[276815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:13:05 compute-0 sudo[276815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:06 compute-0 ceph-mon[74357]: pgmap v1500: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 27 09:13:06 compute-0 podman[276881]: 2026-01-27 09:13:06.241205132 +0000 UTC m=+0.048784158 container create 7010c4e06d68dc45e39555609dddc8296d968d3859c8f49f125365da10da2b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_antonelli, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 09:13:06 compute-0 systemd[1]: Started libpod-conmon-7010c4e06d68dc45e39555609dddc8296d968d3859c8f49f125365da10da2b71.scope.
Jan 27 09:13:06 compute-0 podman[276881]: 2026-01-27 09:13:06.217092961 +0000 UTC m=+0.024672007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:13:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:13:06 compute-0 podman[276881]: 2026-01-27 09:13:06.338596578 +0000 UTC m=+0.146175634 container init 7010c4e06d68dc45e39555609dddc8296d968d3859c8f49f125365da10da2b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:13:06 compute-0 podman[276881]: 2026-01-27 09:13:06.347219774 +0000 UTC m=+0.154798820 container start 7010c4e06d68dc45e39555609dddc8296d968d3859c8f49f125365da10da2b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 09:13:06 compute-0 awesome_antonelli[276898]: 167 167
Jan 27 09:13:06 compute-0 systemd[1]: libpod-7010c4e06d68dc45e39555609dddc8296d968d3859c8f49f125365da10da2b71.scope: Deactivated successfully.
Jan 27 09:13:06 compute-0 podman[276881]: 2026-01-27 09:13:06.365845484 +0000 UTC m=+0.173424540 container attach 7010c4e06d68dc45e39555609dddc8296d968d3859c8f49f125365da10da2b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:13:06 compute-0 podman[276881]: 2026-01-27 09:13:06.366359978 +0000 UTC m=+0.173939024 container died 7010c4e06d68dc45e39555609dddc8296d968d3859c8f49f125365da10da2b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_antonelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 27 09:13:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-89c7ba5c1b26ab114d90f1e43dbbb1eb2e0de0a489eb77090488084ad69ecf19-merged.mount: Deactivated successfully.
Jan 27 09:13:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:06.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:06 compute-0 podman[276881]: 2026-01-27 09:13:06.638476459 +0000 UTC m=+0.446055485 container remove 7010c4e06d68dc45e39555609dddc8296d968d3859c8f49f125365da10da2b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_antonelli, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:13:06 compute-0 systemd[1]: libpod-conmon-7010c4e06d68dc45e39555609dddc8296d968d3859c8f49f125365da10da2b71.scope: Deactivated successfully.
Jan 27 09:13:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:06.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:06 compute-0 podman[276924]: 2026-01-27 09:13:06.812160995 +0000 UTC m=+0.043959254 container create f2a53a6acdb12589c00f4ffc593af0a084d9fe44dd543fd832e098768f19c65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wilbur, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:13:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:13:06 compute-0 systemd[1]: Started libpod-conmon-f2a53a6acdb12589c00f4ffc593af0a084d9fe44dd543fd832e098768f19c65e.scope.
Jan 27 09:13:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac21a6a4dff80601c6bc5d6e0b67280b56329f2a5bccf24278753bf5ede715f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac21a6a4dff80601c6bc5d6e0b67280b56329f2a5bccf24278753bf5ede715f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac21a6a4dff80601c6bc5d6e0b67280b56329f2a5bccf24278753bf5ede715f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac21a6a4dff80601c6bc5d6e0b67280b56329f2a5bccf24278753bf5ede715f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:06 compute-0 podman[276924]: 2026-01-27 09:13:06.789273548 +0000 UTC m=+0.021071827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:13:06 compute-0 podman[276924]: 2026-01-27 09:13:06.896057692 +0000 UTC m=+0.127855971 container init f2a53a6acdb12589c00f4ffc593af0a084d9fe44dd543fd832e098768f19c65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:13:06 compute-0 podman[276924]: 2026-01-27 09:13:06.902659463 +0000 UTC m=+0.134457722 container start f2a53a6acdb12589c00f4ffc593af0a084d9fe44dd543fd832e098768f19c65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wilbur, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:13:06 compute-0 podman[276924]: 2026-01-27 09:13:06.906008355 +0000 UTC m=+0.137806614 container attach f2a53a6acdb12589c00f4ffc593af0a084d9fe44dd543fd832e098768f19c65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 09:13:07 compute-0 infallible_wilbur[276940]: {
Jan 27 09:13:07 compute-0 infallible_wilbur[276940]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:13:07 compute-0 infallible_wilbur[276940]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:13:07 compute-0 infallible_wilbur[276940]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:13:07 compute-0 infallible_wilbur[276940]:         "osd_id": 0,
Jan 27 09:13:07 compute-0 infallible_wilbur[276940]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:13:07 compute-0 infallible_wilbur[276940]:         "type": "bluestore"
Jan 27 09:13:07 compute-0 infallible_wilbur[276940]:     }
Jan 27 09:13:07 compute-0 infallible_wilbur[276940]: }
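(Annotation: this second JSON block is `ceph-volume ... raw list --format json`, keyed by osd_uuid rather than OSD id; the uuid should equal the ceph.osd_fsid tag from the lvm listing above (c06a7c81-... in both). A hedged cross-check sketch, reusing the two saved payloads from the previous annotation:)

    import json

    with open("raw_list.json") as f:
        raw = json.load(f)
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # Every raw-listed OSD should have a matching ceph.osd_fsid LV tag.
    lvm_fsids = {lv["tags"]["ceph.osd_fsid"] for lvs in lvm.values() for lv in lvs}
    for osd_uuid, rec in raw.items():
        status = "ok" if osd_uuid in lvm_fsids else "no matching LVM record"
        print(f"osd.{rec['osd_id']} {rec['device']} ({rec['type']}): {status}")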
Jan 27 09:13:07 compute-0 systemd[1]: libpod-f2a53a6acdb12589c00f4ffc593af0a084d9fe44dd543fd832e098768f19c65e.scope: Deactivated successfully.
Jan 27 09:13:07 compute-0 podman[276961]: 2026-01-27 09:13:07.845667664 +0000 UTC m=+0.029396575 container died f2a53a6acdb12589c00f4ffc593af0a084d9fe44dd543fd832e098768f19c65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac21a6a4dff80601c6bc5d6e0b67280b56329f2a5bccf24278753bf5ede715f0-merged.mount: Deactivated successfully.
Jan 27 09:13:07 compute-0 podman[276961]: 2026-01-27 09:13:07.895050217 +0000 UTC m=+0.078779128 container remove f2a53a6acdb12589c00f4ffc593af0a084d9fe44dd543fd832e098768f19c65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:13:07 compute-0 systemd[1]: libpod-conmon-f2a53a6acdb12589c00f4ffc593af0a084d9fe44dd543fd832e098768f19c65e.scope: Deactivated successfully.
Jan 27 09:13:07 compute-0 sudo[276815]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:13:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:13:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:13:08 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:13:08 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5d0acd9c-e923-48ec-8be3-92d27921ccf4 does not exist
Jan 27 09:13:08 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 8d832506-4c87-4886-9b89-74d75e9c0247 does not exist
Jan 27 09:13:08 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 12c253a0-d5a7-4d7f-8b4e-4525172689ab does not exist
Jan 27 09:13:08 compute-0 sudo[276976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:08 compute-0 sudo[276976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:08 compute-0 sudo[276976]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:08 compute-0 sudo[277001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:13:08 compute-0 sudo[277001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:08 compute-0 sudo[277001]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:08 compute-0 ceph-mon[74357]: pgmap v1501: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:13:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:13:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:13:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:08.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:08.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:13:09 compute-0 nova_compute[247671]: 2026-01-27 09:13:09.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:13:09 compute-0 nova_compute[247671]: 2026-01-27 09:13:09.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:13:09 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:13:09.959 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:13:10 compute-0 ceph-mon[74357]: pgmap v1502: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:13:10 compute-0 nova_compute[247671]: 2026-01-27 09:13:10.419 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:13:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:10.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:10.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 852 B/s wr, 22 op/s
Jan 27 09:13:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:11 compute-0 nova_compute[247671]: 2026-01-27 09:13:11.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:13:11 compute-0 nova_compute[247671]: 2026-01-27 09:13:11.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:13:12 compute-0 sudo[277028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:12 compute-0 sudo[277028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:12 compute-0 sudo[277028]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:12 compute-0 sudo[277053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:12 compute-0 sudo[277053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:12 compute-0 sudo[277053]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:12.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:12 compute-0 ceph-mon[74357]: pgmap v1503: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 852 B/s wr, 22 op/s
Jan 27 09:13:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:12.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 9 op/s
Jan 27 09:13:14 compute-0 nova_compute[247671]: 2026-01-27 09:13:14.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:13:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:14.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 27 09:13:14 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 27 09:13:14 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 27 09:13:14 compute-0 ceph-mon[74357]: pgmap v1504: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 9 op/s
Jan 27 09:13:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:14.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 511 B/s wr, 9 op/s
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:13:15
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.meta']
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
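(Annotation: the balancer lines above are a single optimizer pass — upmap mode, a 5% max-misplaced budget, eleven pools scanned, and "prepared 0/10 changes" meaning no upmap items were needed because placement is already even. A sketch of inspecting the same state from the CLI; both commands are standard ceph tooling, but the subprocess wrapping is just an illustration:)

    import subprocess

    # `ceph balancer status` shows mode/plans; `ceph pg stat` summarizes
    # placement (cf. "305 active+clean" in the pgmap lines here).
    for cmd in (["ceph", "balancer", "status"], ["ceph", "pg", "stat"]):
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        print("$", " ".join(cmd))
        print(result.stdout.strip())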
Jan 27 09:13:15 compute-0 podman[277080]: 2026-01-27 09:13:15.2430685 +0000 UTC m=+0.055198422 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
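(Annotation: the health_status event above is podman's native healthcheck for ovn_metadata_agent; its config_data shows the probe is the mounted /openstack/healthcheck script, currently healthy with a failing streak of 0. The same state can be read back via inspect — a hedged sketch, noting that the field name varies across podman versions (.State.Health on recent releases, .State.Healthcheck on older ones):)

    import json
    import subprocess

    # .State.Health carries the Status/FailingStreak podman logged above.
    out = subprocess.run(
        ["podman", "inspect", "ovn_metadata_agent",
         "--format", "{{json .State.Health}}"],
        capture_output=True, text=True, check=True)
    health = json.loads(out.stdout)
    print(health["Status"], "failing streak:", health["FailingStreak"])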
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:13:15 compute-0 ceph-mon[74357]: osdmap e165: 3 total, 3 up, 3 in
Jan 27 09:13:15 compute-0 ceph-mgr[74650]: client.0 ms_handle_reset on v2:192.168.122.100:6800/510010839
Jan 27 09:13:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:16 compute-0 nova_compute[247671]: 2026-01-27 09:13:16.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:13:16 compute-0 nova_compute[247671]: 2026-01-27 09:13:16.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:13:16 compute-0 nova_compute[247671]: 2026-01-27 09:13:16.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:13:16 compute-0 nova_compute[247671]: 2026-01-27 09:13:16.457 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
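
[editor's note] The _heal_instance_info_cache lines above are emitted by oslo.service's periodic task machinery (note the run_periodic_tasks frame in each line). A minimal sketch of how such a task is declared, using only the public oslo_service.periodic_task API; the 60-second spacing and the task body are illustrative, not Nova's actual values.

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        """Toy stand-in for nova.compute.manager.ComputeManager."""

        def __init__(self):
            super().__init__(CONF)

        # Decorated methods are collected by PeriodicTasks and invoked by
        # run_periodic_tasks(); run_immediately makes the first pass fire.
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            print("rebuilding the list of instances to heal")

    # A service normally drives this from a timer loop; one manual pass here.
    Manager().run_periodic_tasks(context=None)
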
Jan 27 09:13:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:16.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:16.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
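
[editor's note] The anonymous "HEAD / HTTP/1.0" probes above (and the many that follow every two seconds) are load-balancer health checks against radosgw's beast frontend. A small sketch of pulling client, request, status, and latency out of such lines; the regex is written against the exact format shown here and is mine, not part of radosgw.

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous '
            '[27/Jan/2026:09:13:16.506 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000027s')
    m = BEAST.search(line)
    print(m["client"], m["req"], m["status"], float(m["latency"]))
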
Jan 27 09:13:16 compute-0 ceph-mon[74357]: pgmap v1506: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 511 B/s wr, 9 op/s
Jan 27 09:13:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 27 09:13:18 compute-0 nova_compute[247671]: 2026-01-27 09:13:18.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:13:18 compute-0 nova_compute[247671]: 2026-01-27 09:13:18.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:13:18 compute-0 nova_compute[247671]: 2026-01-27 09:13:18.448 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:13:18 compute-0 nova_compute[247671]: 2026-01-27 09:13:18.448 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:13:18 compute-0 nova_compute[247671]: 2026-01-27 09:13:18.448 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
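
[editor's note] The acquire/acquired/released trio above is the standard oslo.concurrency named-lock trace (the "inner" frame is the decorator's wrapper). A minimal sketch of the two public forms that produce it; the lock name is taken from the log, the guarded bodies are illustrative.

    from oslo_concurrency import lockutils

    # Decorator form: emits the same "Acquiring lock ... by ..." /
    # "acquired" / "released" DEBUG lines seen above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # work done while holding the lock

    clean_compute_node_cache()

    # Equivalent context-manager form of the same named lock.
    with lockutils.lock("compute_resources"):
        pass
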
Jan 27 09:13:18 compute-0 nova_compute[247671]: 2026-01-27 09:13:18.449 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:13:18 compute-0 nova_compute[247671]: 2026-01-27 09:13:18.449 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:13:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:18.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:18.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 27 09:13:18 compute-0 ceph-mon[74357]: pgmap v1507: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 27 09:13:18 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:13:18 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2533589187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:13:18 compute-0 nova_compute[247671]: 2026-01-27 09:13:18.967 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
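
[editor's note] The 0.518 s subprocess above is Nova shelling out for Ceph pool capacity via oslo.concurrency. A minimal sketch of the same call; the JSON field names ("stats", "total_avail_bytes", "total_bytes") follow the usual `ceph df --format=json` layout but are an assumption here, since the command output itself is not in the log.

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    df = json.loads(out)
    # Cluster-wide totals; per-pool numbers live under df["pools"].
    stats = df["stats"]
    print(stats["total_avail_bytes"], "of", stats["total_bytes"], "bytes free")
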
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.109 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.110 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5125MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.110 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.111 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.238 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.239 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.239 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.277 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:13:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:13:19 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3255145266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.716 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.720 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.742 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
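
[editor's note] The inventory dict above fixes the schedulable capacity placement will allow against this host: (total - reserved) * allocation_ratio per resource class. A worked check against the exact numbers in the log line; pure arithmetic, no assumptions beyond that formula.

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        # Schedulable capacity for each resource class.
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 18
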
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.743 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:13:19 compute-0 nova_compute[247671]: 2026-01-27 09:13:19.744 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:13:20 compute-0 ceph-mon[74357]: pgmap v1508: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 27 09:13:20 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2533589187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:13:20 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3255145266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:13:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:20.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:20.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 486 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 27 09:13:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.091371) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505201091471, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1456, "num_deletes": 251, "total_data_size": 2492575, "memory_usage": 2525624, "flush_reason": "Manual Compaction"}
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 27 09:13:21 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4090380812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:13:21 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1911560675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505201184537, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2452929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32273, "largest_seqno": 33728, "table_properties": {"data_size": 2446213, "index_size": 3851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14295, "raw_average_key_size": 20, "raw_value_size": 2432616, "raw_average_value_size": 3416, "num_data_blocks": 171, "num_entries": 712, "num_filter_entries": 712, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769505056, "oldest_key_time": 1769505056, "file_creation_time": 1769505201, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 93157 microseconds, and 5980 cpu microseconds.
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.184581) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2452929 bytes OK
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.184597) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.207448) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.207471) EVENT_LOG_v1 {"time_micros": 1769505201207464, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.207485) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2486380, prev total WAL file size 2486380, number of live WAL files 2.
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.208311) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2395KB)], [71(8227KB)]
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505201208385, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10878044, "oldest_snapshot_seqno": -1}
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5614 keys, 8900746 bytes, temperature: kUnknown
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505201270819, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8900746, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8864025, "index_size": 21571, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 144231, "raw_average_key_size": 25, "raw_value_size": 8763348, "raw_average_value_size": 1560, "num_data_blocks": 871, "num_entries": 5614, "num_filter_entries": 5614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769505201, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.271080) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8900746 bytes
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.273117) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.0 rd, 142.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 8.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(8.1) write-amplify(3.6) OK, records in: 6133, records dropped: 519 output_compression: NoCompression
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.273137) EVENT_LOG_v1 {"time_micros": 1769505201273128, "job": 40, "event": "compaction_finished", "compaction_time_micros": 62515, "compaction_time_cpu_micros": 19424, "output_level": 6, "num_output_files": 1, "total_output_size": 8900746, "num_input_records": 6133, "num_output_records": 5614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505201273719, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505201275622, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.208206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.275663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.275667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.275669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.275671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:13:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:13:21.275672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
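
[editor's note] The JOB 40 summary above reports read-write-amplify(8.1) and write-amplify(3.6); both factors follow directly from the byte counts in the flush and compaction events of jobs 39/40. A worked recomputation using those exact figures (the 2.3/8.0/8.5 MB shown in the summary are rounded, hence the tiny differences).

    # Byte counts from the EVENT_LOG_v1 records above.
    ingest_l0 = 2452929          # table #73, the freshly flushed L0 file
    input_total = 10878044       # compaction_started input_data_size (L0 + L6)
    output_total = 8900746       # table #74 written back to L6

    write_amplify = output_total / ingest_l0
    read_write_amplify = (input_total + output_total) / ingest_l0
    print(f"write-amplify      {write_amplify:.1f}")   # -> 3.6
    print(f"read-write-amplify {read_write_amplify:.1f}")   # -> 8.1
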
Jan 27 09:13:22 compute-0 ceph-mon[74357]: pgmap v1509: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 486 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 27 09:13:22 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4259108392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:13:22 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/4076861848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:13:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:22.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:22.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 27 09:13:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 27 09:13:24 compute-0 ceph-mon[74357]: pgmap v1510: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 27 09:13:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 27 09:13:24 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 27 09:13:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:24.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009891035036292573 of space, bias 1.0, pg target 0.2967310510887772 quantized to 32 (current 32)
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
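
[editor's note] Each pg target above is capacity_ratio x bias x (OSD count x target PGs per OSD), then quantized to a power of two no smaller than the pool's floor. With the 3 OSDs from the osdmap lines and the default mon_target_pg_per_osd of 100, the logged numbers reproduce exactly; the 100-per-OSD default and the per-pool floors (32 for most pools, 16 for cephfs.cephfs.meta) are assumptions consistent with the log, not stated in it.

    def pg_target(capacity_ratio, bias, osds=3, pg_per_osd=100):
        # Raw target before quantization, as in the autoscaler lines above.
        return capacity_ratio * bias * osds * pg_per_osd

    def quantize(target, floor):
        # Round up to a power of two, never below the pool's floor.
        pgs = floor
        while pgs < target:
            pgs *= 2
        return pgs

    # Pool 'volumes': reproduces "pg target 0.2967..." quantized to 32.
    raw = pg_target(0.0009891035036292573, 1.0)
    print(f"{raw:.4f} -> {quantize(raw, 32)} PGs")
    # Pool 'cephfs.cephfs.meta' (bias 4.0): reproduces 0.001744... -> 16.
    raw = pg_target(1.4540294062907128e-06, 4.0)
    print(f"{raw:.6f} -> {quantize(raw, 16)} PGs")
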
Jan 27 09:13:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:24.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 27 09:13:25 compute-0 ceph-mon[74357]: osdmap e166: 3 total, 3 up, 3 in
Jan 27 09:13:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:26 compute-0 ceph-mon[74357]: pgmap v1512: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 27 09:13:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1384749459' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:13:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1384749459' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
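
[editor's note] The paired df / "osd pool get-quota" dispatches from 192.168.122.10 are a client (consistent with Cinder's RBD driver) refreshing capacity and quota for the volumes pool. A sketch of the second query; the quota_max_* field names follow the usual `ceph osd pool get-quota --format json` output and are an assumption here.

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    quota = json.loads(out)
    # A value of 0 means no quota is set on the pool.
    print(quota.get("quota_max_bytes"), quota.get("quota_max_objects"))
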
Jan 27 09:13:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:26.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:26.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 74 MiB data, 295 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Jan 27 09:13:27 compute-0 nova_compute[247671]: 2026-01-27 09:13:27.744 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:13:27 compute-0 nova_compute[247671]: 2026-01-27 09:13:27.745 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:13:28 compute-0 ceph-mon[74357]: pgmap v1513: 305 pgs: 305 active+clean; 74 MiB data, 295 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Jan 27 09:13:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:28.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 09:13:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:28.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 09:13:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 74 MiB data, 295 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Jan 27 09:13:30 compute-0 ceph-mon[74357]: pgmap v1514: 305 pgs: 305 active+clean; 74 MiB data, 295 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Jan 27 09:13:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:30.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:30.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 62 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.0 MiB/s wr, 62 op/s
Jan 27 09:13:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 27 09:13:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 27 09:13:31 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 27 09:13:31 compute-0 podman[277151]: 2026-01-27 09:13:31.302146731 +0000 UTC m=+0.107512365 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:13:32 compute-0 ceph-mon[74357]: pgmap v1515: 305 pgs: 305 active+clean; 62 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.0 MiB/s wr, 62 op/s
Jan 27 09:13:32 compute-0 ceph-mon[74357]: osdmap e167: 3 total, 3 up, 3 in
Jan 27 09:13:32 compute-0 sudo[277178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:32 compute-0 sudo[277178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:32 compute-0 sudo[277178]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:32 compute-0 sudo[277203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:32 compute-0 sudo[277203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:32 compute-0 sudo[277203]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:32.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:32.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 62 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.4 MiB/s wr, 83 op/s
Jan 27 09:13:34 compute-0 ceph-mon[74357]: pgmap v1517: 305 pgs: 305 active+clean; 62 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.4 MiB/s wr, 83 op/s
Jan 27 09:13:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:34.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:34.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 62 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 72 op/s
Jan 27 09:13:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:36 compute-0 ceph-mon[74357]: pgmap v1518: 305 pgs: 305 active+clean; 62 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 72 op/s
Jan 27 09:13:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:36.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:36.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 8.2 MiB/s rd, 3.7 MiB/s wr, 113 op/s
Jan 27 09:13:38 compute-0 ceph-mon[74357]: pgmap v1519: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 8.2 MiB/s rd, 3.7 MiB/s wr, 113 op/s
Jan 27 09:13:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:38.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:38.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 8.2 MiB/s rd, 3.7 MiB/s wr, 113 op/s
Jan 27 09:13:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:40.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:40 compute-0 ceph-mon[74357]: pgmap v1520: 305 pgs: 305 active+clean; 134 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 8.2 MiB/s rd, 3.7 MiB/s wr, 113 op/s
Jan 27 09:13:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:40.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 214 MiB data, 347 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 7.0 MiB/s wr, 166 op/s
Jan 27 09:13:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:41 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:13:41.627 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:13:41 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:13:41.628 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:13:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:42.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:42 compute-0 ceph-mon[74357]: pgmap v1521: 305 pgs: 305 active+clean; 214 MiB data, 347 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 7.0 MiB/s wr, 166 op/s
Jan 27 09:13:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:42.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 247 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 7.2 MiB/s wr, 154 op/s
Jan 27 09:13:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:44.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:44 compute-0 ceph-mon[74357]: pgmap v1522: 305 pgs: 305 active+clean; 247 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 7.2 MiB/s wr, 154 op/s
Jan 27 09:13:44 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1622710965' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:13:44 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1622710965' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:13:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:44.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 247 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 7.1 MiB/s wr, 151 op/s
Jan 27 09:13:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:13:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:13:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:13:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:13:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:13:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:13:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:46 compute-0 podman[277235]: 2026-01-27 09:13:46.241580652 +0000 UTC m=+0.055005687 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 09:13:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:46.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:46 compute-0 ceph-mon[74357]: pgmap v1523: 305 pgs: 305 active+clean; 247 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 7.1 MiB/s wr, 151 op/s
Jan 27 09:13:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:46.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 201 MiB data, 340 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 7.1 MiB/s wr, 167 op/s
Jan 27 09:13:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:48.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:48 compute-0 ceph-mon[74357]: pgmap v1524: 305 pgs: 305 active+clean; 201 MiB data, 340 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 7.1 MiB/s wr, 167 op/s
Jan 27 09:13:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:48.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 201 MiB data, 340 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 4.0 MiB/s wr, 91 op/s
Jan 27 09:13:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:50.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:50.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:50 compute-0 ceph-mon[74357]: pgmap v1525: 305 pgs: 305 active+clean; 201 MiB data, 340 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 4.0 MiB/s wr, 91 op/s
Jan 27 09:13:50 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/159296740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:13:50 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/159296740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:13:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 170 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 4.0 MiB/s wr, 106 op/s
Jan 27 09:13:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:51 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:13:51.630 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:13:52 compute-0 ceph-mon[74357]: pgmap v1526: 305 pgs: 305 active+clean; 170 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 4.0 MiB/s wr, 106 op/s
Jan 27 09:13:52 compute-0 sudo[277258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:52 compute-0 sudo[277258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:52 compute-0 sudo[277258]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:52 compute-0 sudo[277283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:13:52 compute-0 sudo[277283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:13:52 compute-0 sudo[277283]: pam_unix(sudo:session): session closed for user root
Jan 27 09:13:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:52.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:52.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 154 MiB data, 325 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 1.2 MiB/s wr, 52 op/s
Jan 27 09:13:53 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2049725173' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:13:53 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2049725173' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:13:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:13:53 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2379388668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:13:53 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:13:53 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2379388668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:13:54 compute-0 ceph-mon[74357]: pgmap v1527: 305 pgs: 305 active+clean; 154 MiB data, 325 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 1.2 MiB/s wr, 52 op/s
Jan 27 09:13:54 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2379388668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:13:54 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2379388668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:13:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:13:54.254 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:13:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:13:54.254 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:13:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:13:54.255 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:13:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:54.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:54.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 154 MiB data, 325 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Jan 27 09:13:55 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2172866641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:13:55 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2172866641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:13:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:56 compute-0 ceph-mon[74357]: pgmap v1528: 305 pgs: 305 active+clean; 154 MiB data, 325 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Jan 27 09:13:56 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2003090559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:13:56 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2003090559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:13:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:56.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:13:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:56.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:13:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 62 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.2 KiB/s wr, 95 op/s
Jan 27 09:13:58 compute-0 ceph-mon[74357]: pgmap v1529: 305 pgs: 305 active+clean; 62 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.2 KiB/s wr, 95 op/s
Jan 27 09:13:58 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:13:58 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723571190' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:13:58 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:13:58 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723571190' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:13:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:13:58.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:13:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:13:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:13:58.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:13:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 62 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.6 KiB/s wr, 79 op/s
Jan 27 09:13:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:13:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1697872854' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:13:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:13:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1697872854' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:13:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/723571190' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:13:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/723571190' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:13:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1697872854' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:13:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1697872854' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:14:00 compute-0 ceph-mon[74357]: pgmap v1530: 305 pgs: 305 active+clean; 62 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.6 KiB/s wr, 79 op/s
Jan 27 09:14:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:00.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:00.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 95 MiB data, 292 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 122 op/s
Jan 27 09:14:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:01 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1801410268' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:14:01 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1801410268' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:14:02 compute-0 podman[277313]: 2026-01-27 09:14:02.249576563 +0000 UTC m=+0.070042089 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 27 09:14:02 compute-0 ceph-mon[74357]: pgmap v1531: 305 pgs: 305 active+clean; 95 MiB data, 292 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 122 op/s
Jan 27 09:14:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:02.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:02.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 108 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 27 09:14:03 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/995903882' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:14:03 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/995903882' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:14:04 compute-0 ceph-mon[74357]: pgmap v1532: 305 pgs: 305 active+clean; 108 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 27 09:14:04 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2558446329' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:14:04 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2558446329' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:14:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:04.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:04.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 108 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 27 09:14:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 27 09:14:05 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 27 09:14:05 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 27 09:14:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:06 compute-0 ceph-mon[74357]: pgmap v1533: 305 pgs: 305 active+clean; 108 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 27 09:14:06 compute-0 ceph-mon[74357]: osdmap e168: 3 total, 3 up, 3 in
Jan 27 09:14:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:06.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:06.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 108 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 92 KiB/s rd, 2.1 MiB/s wr, 130 op/s
Jan 27 09:14:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:14:07 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3099144685' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:14:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:14:07 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3099144685' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:14:08 compute-0 ceph-mon[74357]: pgmap v1535: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 108 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 92 KiB/s rd, 2.1 MiB/s wr, 130 op/s
Jan 27 09:14:08 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3099144685' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:14:08 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3099144685' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:14:08 compute-0 sudo[277343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:08 compute-0 sudo[277343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:08 compute-0 sudo[277343]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:08 compute-0 sudo[277368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:14:08 compute-0 sudo[277368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:08 compute-0 sudo[277368]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:08.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:08 compute-0 sudo[277393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:08 compute-0 sudo[277393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:08 compute-0 sudo[277393]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:08.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:08 compute-0 sudo[277418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 27 09:14:08 compute-0 sudo[277418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 108 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 92 KiB/s rd, 2.1 MiB/s wr, 130 op/s
Jan 27 09:14:09 compute-0 podman[277508]: 2026-01-27 09:14:09.204860541 +0000 UTC m=+0.066600494 container exec b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:14:09 compute-0 podman[277508]: 2026-01-27 09:14:09.312329535 +0000 UTC m=+0.174069498 container exec_died b81872c9cb50611aa61acc285396bc5e4c7fdd19b851baf36614c1baccbb29f8 (image=quay.io/ceph/ceph:v18, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mon-compute-0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:14:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 09:14:09 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 09:14:09 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:09 compute-0 podman[277642]: 2026-01-27 09:14:09.858652083 +0000 UTC m=+0.047691277 container exec 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 09:14:09 compute-0 podman[277642]: 2026-01-27 09:14:09.864464272 +0000 UTC m=+0.053503436 container exec_died 7365d2264f9cec5a9a669dadd0a5a6177915bbb9199c33bb0acbb32c5c368d76 (image=quay.io/ceph/haproxy:2.3, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-haproxy-rgw-default-compute-0-njrjkb)
Jan 27 09:14:10 compute-0 podman[277706]: 2026-01-27 09:14:10.039855185 +0000 UTC m=+0.044110819 container exec eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, name=keepalived, io.openshift.tags=Ceph keepalived, vcs-type=git, io.openshift.expose-services=)
Jan 27 09:14:10 compute-0 podman[277727]: 2026-01-27 09:14:10.106084818 +0000 UTC m=+0.047796890 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.openshift.expose-services=, name=keepalived, release=1793, version=2.2.4, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.tags=Ceph keepalived)
Jan 27 09:14:10 compute-0 podman[277706]: 2026-01-27 09:14:10.112232676 +0000 UTC m=+0.116488310 container exec_died eb32867dcc87ae2e03f1a429c9ea460f98c4b70a3dcff6efde76b69ef7b8882e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-keepalived-rgw-default-compute-0-knqeph, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, version=2.2.4, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.openshift.expose-services=, description=keepalived for Ceph, distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 27 09:14:10 compute-0 sudo[277418]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:14:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:14:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:10 compute-0 sudo[277756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:10 compute-0 sudo[277756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:10 compute-0 sudo[277756]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:10 compute-0 sudo[277781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:14:10 compute-0 sudo[277781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:10 compute-0 sudo[277781]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:10 compute-0 sudo[277806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:10 compute-0 sudo[277806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:10 compute-0 sudo[277806]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:10 compute-0 sudo[277831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:14:10 compute-0 sudo[277831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:10.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:10 compute-0 ceph-mon[74357]: pgmap v1536: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 108 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 92 KiB/s rd, 2.1 MiB/s wr, 130 op/s
Jan 27 09:14:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:10.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 639 KiB/s wr, 97 op/s
Jan 27 09:14:10 compute-0 sudo[277831]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:14:10 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:14:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:14:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:14:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:14:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:10 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b187d8fc-2218-4936-b0ba-bc7644ac711f does not exist
Jan 27 09:14:10 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 0fc8f19b-ba5b-43c0-9d62-5ad698e5c329 does not exist
Jan 27 09:14:10 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 7b81f967-dc1b-4e28-82a6-9e07c026bb39 does not exist
Jan 27 09:14:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:14:10 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:14:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:14:10 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:14:10 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:14:10 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:14:10 compute-0 sudo[277888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:10 compute-0 sudo[277888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:10 compute-0 sudo[277888]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:11 compute-0 sudo[277913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:14:11 compute-0 sudo[277913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:11 compute-0 sudo[277913]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:11 compute-0 sudo[277938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:11 compute-0 sudo[277938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:11 compute-0 sudo[277938]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:11 compute-0 sudo[277963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:14:11 compute-0 sudo[277963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:11 compute-0 nova_compute[247671]: 2026-01-27 09:14:11.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:14:11 compute-0 nova_compute[247671]: 2026-01-27 09:14:11.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:14:11 compute-0 nova_compute[247671]: 2026-01-27 09:14:11.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:14:11 compute-0 podman[278030]: 2026-01-27 09:14:11.461838021 +0000 UTC m=+0.042697620 container create ba4c6592c4fc28c9f052af3acfb6caed2cb5a74c159f1a81c07281cb6b64298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:14:11 compute-0 systemd[1]: Started libpod-conmon-ba4c6592c4fc28c9f052af3acfb6caed2cb5a74c159f1a81c07281cb6b64298e.scope.
Jan 27 09:14:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:14:11 compute-0 podman[278030]: 2026-01-27 09:14:11.530856901 +0000 UTC m=+0.111716500 container init ba4c6592c4fc28c9f052af3acfb6caed2cb5a74c159f1a81c07281cb6b64298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_spence, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:14:11 compute-0 podman[278030]: 2026-01-27 09:14:11.537078511 +0000 UTC m=+0.117938110 container start ba4c6592c4fc28c9f052af3acfb6caed2cb5a74c159f1a81c07281cb6b64298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_spence, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:14:11 compute-0 podman[278030]: 2026-01-27 09:14:11.443810118 +0000 UTC m=+0.024669747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:14:11 compute-0 podman[278030]: 2026-01-27 09:14:11.53994735 +0000 UTC m=+0.120806949 container attach ba4c6592c4fc28c9f052af3acfb6caed2cb5a74c159f1a81c07281cb6b64298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 09:14:11 compute-0 elegant_spence[278046]: 167 167
Jan 27 09:14:11 compute-0 systemd[1]: libpod-ba4c6592c4fc28c9f052af3acfb6caed2cb5a74c159f1a81c07281cb6b64298e.scope: Deactivated successfully.
Jan 27 09:14:11 compute-0 podman[278030]: 2026-01-27 09:14:11.542786868 +0000 UTC m=+0.123646477 container died ba4c6592c4fc28c9f052af3acfb6caed2cb5a74c159f1a81c07281cb6b64298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Jan 27 09:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5785db7080d7cbd599cf4e51c490528eeb28cc0519ebb81bd08973d91c70b1a2-merged.mount: Deactivated successfully.
Jan 27 09:14:11 compute-0 podman[278030]: 2026-01-27 09:14:11.587995235 +0000 UTC m=+0.168854834 container remove ba4c6592c4fc28c9f052af3acfb6caed2cb5a74c159f1a81c07281cb6b64298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 09:14:11 compute-0 systemd[1]: libpod-conmon-ba4c6592c4fc28c9f052af3acfb6caed2cb5a74c159f1a81c07281cb6b64298e.scope: Deactivated successfully.
Jan 27 09:14:11 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:14:11 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:14:11 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:11 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:14:11 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:14:11 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:14:11 compute-0 podman[278070]: 2026-01-27 09:14:11.740850131 +0000 UTC m=+0.042146515 container create 6eee767e1b27191049969583d416511042fade45984ef9b27f391e5b0d57e074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 27 09:14:11 compute-0 systemd[1]: Started libpod-conmon-6eee767e1b27191049969583d416511042fade45984ef9b27f391e5b0d57e074.scope.
Jan 27 09:14:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0961621df060eb7f584e98bd14501a8b3a24c29c027cea7fb6b5c2e59058c978/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0961621df060eb7f584e98bd14501a8b3a24c29c027cea7fb6b5c2e59058c978/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0961621df060eb7f584e98bd14501a8b3a24c29c027cea7fb6b5c2e59058c978/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0961621df060eb7f584e98bd14501a8b3a24c29c027cea7fb6b5c2e59058c978/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0961621df060eb7f584e98bd14501a8b3a24c29c027cea7fb6b5c2e59058c978/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:11 compute-0 podman[278070]: 2026-01-27 09:14:11.817900821 +0000 UTC m=+0.119197245 container init 6eee767e1b27191049969583d416511042fade45984ef9b27f391e5b0d57e074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:14:11 compute-0 podman[278070]: 2026-01-27 09:14:11.721189043 +0000 UTC m=+0.022485437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:14:11 compute-0 podman[278070]: 2026-01-27 09:14:11.824517042 +0000 UTC m=+0.125813426 container start 6eee767e1b27191049969583d416511042fade45984ef9b27f391e5b0d57e074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 27 09:14:11 compute-0 podman[278070]: 2026-01-27 09:14:11.827015441 +0000 UTC m=+0.128311835 container attach 6eee767e1b27191049969583d416511042fade45984ef9b27f391e5b0d57e074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:14:12 compute-0 sudo[278098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:12 compute-0 sudo[278098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:12 compute-0 sudo[278098]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:12.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:12 compute-0 sweet_euler[278086]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:14:12 compute-0 sweet_euler[278086]: --> relative data size: 1.0
Jan 27 09:14:12 compute-0 sweet_euler[278086]: --> All data devices are unavailable
Jan 27 09:14:12 compute-0 systemd[1]: libpod-6eee767e1b27191049969583d416511042fade45984ef9b27f391e5b0d57e074.scope: Deactivated successfully.
Jan 27 09:14:12 compute-0 podman[278070]: 2026-01-27 09:14:12.72939234 +0000 UTC m=+1.030688714 container died 6eee767e1b27191049969583d416511042fade45984ef9b27f391e5b0d57e074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_euler, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:14:12 compute-0 sudo[278127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:12 compute-0 sudo[278127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:12 compute-0 sudo[278127]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:12.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:12 compute-0 ceph-mon[74357]: pgmap v1537: 305 pgs: 305 active+clean; 55 MiB data, 262 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 639 KiB/s wr, 97 op/s
Jan 27 09:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0961621df060eb7f584e98bd14501a8b3a24c29c027cea7fb6b5c2e59058c978-merged.mount: Deactivated successfully.
Jan 27 09:14:12 compute-0 podman[278070]: 2026-01-27 09:14:12.780098828 +0000 UTC m=+1.081395212 container remove 6eee767e1b27191049969583d416511042fade45984ef9b27f391e5b0d57e074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_euler, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 09:14:12 compute-0 systemd[1]: libpod-conmon-6eee767e1b27191049969583d416511042fade45984ef9b27f391e5b0d57e074.scope: Deactivated successfully.
Jan 27 09:14:12 compute-0 sudo[277963]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 KiB/s wr, 76 op/s
Jan 27 09:14:12 compute-0 sudo[278164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:12 compute-0 sudo[278164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:12 compute-0 sudo[278164]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:12 compute-0 sudo[278189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:14:12 compute-0 sudo[278189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:12 compute-0 sudo[278189]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:12 compute-0 sudo[278214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:12 compute-0 sudo[278214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:12 compute-0 sudo[278214]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:13 compute-0 sudo[278239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:14:13 compute-0 sudo[278239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:13 compute-0 podman[278304]: 2026-01-27 09:14:13.337507121 +0000 UTC m=+0.041212960 container create e43393061532b1c153770a1c069718934108c2d51956d80fd35f0407dfba16c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:14:13 compute-0 systemd[1]: Started libpod-conmon-e43393061532b1c153770a1c069718934108c2d51956d80fd35f0407dfba16c7.scope.
Jan 27 09:14:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:14:13 compute-0 podman[278304]: 2026-01-27 09:14:13.411731093 +0000 UTC m=+0.115436932 container init e43393061532b1c153770a1c069718934108c2d51956d80fd35f0407dfba16c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:14:13 compute-0 podman[278304]: 2026-01-27 09:14:13.3184753 +0000 UTC m=+0.022181159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:14:13 compute-0 podman[278304]: 2026-01-27 09:14:13.421558733 +0000 UTC m=+0.125264572 container start e43393061532b1c153770a1c069718934108c2d51956d80fd35f0407dfba16c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 09:14:13 compute-0 nova_compute[247671]: 2026-01-27 09:14:13.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:14:13 compute-0 admiring_panini[278320]: 167 167
Jan 27 09:14:13 compute-0 systemd[1]: libpod-e43393061532b1c153770a1c069718934108c2d51956d80fd35f0407dfba16c7.scope: Deactivated successfully.
Jan 27 09:14:13 compute-0 podman[278304]: 2026-01-27 09:14:13.42512519 +0000 UTC m=+0.128831029 container attach e43393061532b1c153770a1c069718934108c2d51956d80fd35f0407dfba16c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 09:14:13 compute-0 podman[278304]: 2026-01-27 09:14:13.426006394 +0000 UTC m=+0.129712263 container died e43393061532b1c153770a1c069718934108c2d51956d80fd35f0407dfba16c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:14:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c2ea0bd31e60889a05dcebee88a9576d68b63ad09f036cf2ef202897a526424-merged.mount: Deactivated successfully.
Jan 27 09:14:13 compute-0 podman[278304]: 2026-01-27 09:14:13.4591039 +0000 UTC m=+0.162809739 container remove e43393061532b1c153770a1c069718934108c2d51956d80fd35f0407dfba16c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:14:13 compute-0 systemd[1]: libpod-conmon-e43393061532b1c153770a1c069718934108c2d51956d80fd35f0407dfba16c7.scope: Deactivated successfully.
Jan 27 09:14:13 compute-0 podman[278345]: 2026-01-27 09:14:13.626645368 +0000 UTC m=+0.048285463 container create 90ffda12ebe1b3665b57f281f04a5cbf72fd4d999da95ce5b73ec68c96b7f7e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:14:13 compute-0 systemd[1]: Started libpod-conmon-90ffda12ebe1b3665b57f281f04a5cbf72fd4d999da95ce5b73ec68c96b7f7e5.scope.
Jan 27 09:14:13 compute-0 podman[278345]: 2026-01-27 09:14:13.606917718 +0000 UTC m=+0.028557863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:14:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46923641d952af849780017d5a5bb0cf415a1e947d57677b82b55e672652c6b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46923641d952af849780017d5a5bb0cf415a1e947d57677b82b55e672652c6b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46923641d952af849780017d5a5bb0cf415a1e947d57677b82b55e672652c6b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46923641d952af849780017d5a5bb0cf415a1e947d57677b82b55e672652c6b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:13 compute-0 podman[278345]: 2026-01-27 09:14:13.72059561 +0000 UTC m=+0.142235705 container init 90ffda12ebe1b3665b57f281f04a5cbf72fd4d999da95ce5b73ec68c96b7f7e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:14:13 compute-0 podman[278345]: 2026-01-27 09:14:13.751526828 +0000 UTC m=+0.173166903 container start 90ffda12ebe1b3665b57f281f04a5cbf72fd4d999da95ce5b73ec68c96b7f7e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 27 09:14:13 compute-0 podman[278345]: 2026-01-27 09:14:13.754934001 +0000 UTC m=+0.176574076 container attach 90ffda12ebe1b3665b57f281f04a5cbf72fd4d999da95ce5b73ec68c96b7f7e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dhawan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:14:14 compute-0 nova_compute[247671]: 2026-01-27 09:14:14.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]: {
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:     "0": [
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:         {
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             "devices": [
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "/dev/loop3"
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             ],
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             "lv_name": "ceph_lv0",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             "lv_size": "7511998464",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             "name": "ceph_lv0",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             "tags": {
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "ceph.cluster_name": "ceph",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "ceph.crush_device_class": "",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "ceph.encrypted": "0",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "ceph.osd_id": "0",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "ceph.type": "block",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:                 "ceph.vdo": "0"
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             },
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             "type": "block",
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:             "vg_name": "ceph_vg0"
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:         }
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]:     ]
Jan 27 09:14:14 compute-0 sharp_dhawan[278361]: }
Jan 27 09:14:14 compute-0 systemd[1]: libpod-90ffda12ebe1b3665b57f281f04a5cbf72fd4d999da95ce5b73ec68c96b7f7e5.scope: Deactivated successfully.
Jan 27 09:14:14 compute-0 podman[278345]: 2026-01-27 09:14:14.509839212 +0000 UTC m=+0.931479287 container died 90ffda12ebe1b3665b57f281f04a5cbf72fd4d999da95ce5b73ec68c96b7f7e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dhawan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 27 09:14:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-46923641d952af849780017d5a5bb0cf415a1e947d57677b82b55e672652c6b4-merged.mount: Deactivated successfully.
Jan 27 09:14:14 compute-0 podman[278345]: 2026-01-27 09:14:14.561972 +0000 UTC m=+0.983612075 container remove 90ffda12ebe1b3665b57f281f04a5cbf72fd4d999da95ce5b73ec68c96b7f7e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:14:14 compute-0 systemd[1]: libpod-conmon-90ffda12ebe1b3665b57f281f04a5cbf72fd4d999da95ce5b73ec68c96b7f7e5.scope: Deactivated successfully.
Jan 27 09:14:14 compute-0 sudo[278239]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:14 compute-0 sudo[278383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:14 compute-0 sudo[278383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:14 compute-0 sudo[278383]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:14.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:14 compute-0 sudo[278408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:14:14 compute-0 sudo[278408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:14 compute-0 sudo[278408]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:14.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:14 compute-0 ceph-mon[74357]: pgmap v1538: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 KiB/s wr, 76 op/s
Jan 27 09:14:14 compute-0 sudo[278433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:14 compute-0 sudo[278433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:14 compute-0 sudo[278433]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:14 compute-0 sudo[278458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:14:14 compute-0 sudo[278458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 KiB/s wr, 76 op/s
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:14:15
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'images', 'volumes', '.rgw.root', 'backups', 'vms', 'default.rgw.control']
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:14:15 compute-0 podman[278524]: 2026-01-27 09:14:15.168926369 +0000 UTC m=+0.036404448 container create ade523ffa88b7e1c2813778ed73f7e56f8e5e3e4af9e9b3211d89b60422bd22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_murdock, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:14:15 compute-0 systemd[1]: Started libpod-conmon-ade523ffa88b7e1c2813778ed73f7e56f8e5e3e4af9e9b3211d89b60422bd22d.scope.
Jan 27 09:14:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:14:15 compute-0 podman[278524]: 2026-01-27 09:14:15.242773871 +0000 UTC m=+0.110251980 container init ade523ffa88b7e1c2813778ed73f7e56f8e5e3e4af9e9b3211d89b60422bd22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_murdock, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Jan 27 09:14:15 compute-0 podman[278524]: 2026-01-27 09:14:15.249612178 +0000 UTC m=+0.117090257 container start ade523ffa88b7e1c2813778ed73f7e56f8e5e3e4af9e9b3211d89b60422bd22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_murdock, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:14:15 compute-0 podman[278524]: 2026-01-27 09:14:15.154522064 +0000 UTC m=+0.022000163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:14:15 compute-0 podman[278524]: 2026-01-27 09:14:15.252470226 +0000 UTC m=+0.119948305 container attach ade523ffa88b7e1c2813778ed73f7e56f8e5e3e4af9e9b3211d89b60422bd22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_murdock, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:14:15 compute-0 sleepy_murdock[278540]: 167 167
Jan 27 09:14:15 compute-0 systemd[1]: libpod-ade523ffa88b7e1c2813778ed73f7e56f8e5e3e4af9e9b3211d89b60422bd22d.scope: Deactivated successfully.
Jan 27 09:14:15 compute-0 podman[278524]: 2026-01-27 09:14:15.254226815 +0000 UTC m=+0.121704894 container died ade523ffa88b7e1c2813778ed73f7e56f8e5e3e4af9e9b3211d89b60422bd22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_murdock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-37fde077e5a5b5b9c52b19b426e80fbae1be16e67bdeecf4aebdbe9cca669a23-merged.mount: Deactivated successfully.
Jan 27 09:14:15 compute-0 podman[278524]: 2026-01-27 09:14:15.284192716 +0000 UTC m=+0.151670805 container remove ade523ffa88b7e1c2813778ed73f7e56f8e5e3e4af9e9b3211d89b60422bd22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_murdock, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:14:15 compute-0 systemd[1]: libpod-conmon-ade523ffa88b7e1c2813778ed73f7e56f8e5e3e4af9e9b3211d89b60422bd22d.scope: Deactivated successfully.
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:14:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:14:15 compute-0 podman[278564]: 2026-01-27 09:14:15.434346547 +0000 UTC m=+0.036064899 container create 669f0ab72e50890ed7500edc4e3a2f641723e4138c7e2255d96e2824d148d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:14:15 compute-0 systemd[1]: Started libpod-conmon-669f0ab72e50890ed7500edc4e3a2f641723e4138c7e2255d96e2824d148d4be.scope.
Jan 27 09:14:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a726916815b29b53cfdf1f29c21316f44cb7324eb541c2761e904eb0afae0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a726916815b29b53cfdf1f29c21316f44cb7324eb541c2761e904eb0afae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a726916815b29b53cfdf1f29c21316f44cb7324eb541c2761e904eb0afae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a726916815b29b53cfdf1f29c21316f44cb7324eb541c2761e904eb0afae0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:15 compute-0 podman[278564]: 2026-01-27 09:14:15.507071648 +0000 UTC m=+0.108790030 container init 669f0ab72e50890ed7500edc4e3a2f641723e4138c7e2255d96e2824d148d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:14:15 compute-0 podman[278564]: 2026-01-27 09:14:15.418573905 +0000 UTC m=+0.020292277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:14:15 compute-0 podman[278564]: 2026-01-27 09:14:15.515614372 +0000 UTC m=+0.117332724 container start 669f0ab72e50890ed7500edc4e3a2f641723e4138c7e2255d96e2824d148d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:14:15 compute-0 podman[278564]: 2026-01-27 09:14:15.518793099 +0000 UTC m=+0.120511461 container attach 669f0ab72e50890ed7500edc4e3a2f641723e4138c7e2255d96e2824d148d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:14:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 27 09:14:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 27 09:14:16 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 27 09:14:16 compute-0 musing_fermi[278580]: {
Jan 27 09:14:16 compute-0 musing_fermi[278580]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:14:16 compute-0 musing_fermi[278580]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:14:16 compute-0 musing_fermi[278580]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:14:16 compute-0 musing_fermi[278580]:         "osd_id": 0,
Jan 27 09:14:16 compute-0 musing_fermi[278580]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:14:16 compute-0 musing_fermi[278580]:         "type": "bluestore"
Jan 27 09:14:16 compute-0 musing_fermi[278580]:     }
Jan 27 09:14:16 compute-0 musing_fermi[278580]: }
Jan 27 09:14:16 compute-0 systemd[1]: libpod-669f0ab72e50890ed7500edc4e3a2f641723e4138c7e2255d96e2824d148d4be.scope: Deactivated successfully.
Jan 27 09:14:16 compute-0 podman[278564]: 2026-01-27 09:14:16.322190488 +0000 UTC m=+0.923908840 container died 669f0ab72e50890ed7500edc4e3a2f641723e4138c7e2255d96e2824d148d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 09:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a8a726916815b29b53cfdf1f29c21316f44cb7324eb541c2761e904eb0afae0-merged.mount: Deactivated successfully.
Jan 27 09:14:16 compute-0 podman[278564]: 2026-01-27 09:14:16.376686031 +0000 UTC m=+0.978404383 container remove 669f0ab72e50890ed7500edc4e3a2f641723e4138c7e2255d96e2824d148d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 27 09:14:16 compute-0 systemd[1]: libpod-conmon-669f0ab72e50890ed7500edc4e3a2f641723e4138c7e2255d96e2824d148d4be.scope: Deactivated successfully.
Jan 27 09:14:16 compute-0 sudo[278458]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:14:16 compute-0 podman[278601]: 2026-01-27 09:14:16.415260046 +0000 UTC m=+0.060918908 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 09:14:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:14:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:16 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 0915bcdf-a212-4d8c-8f6c-fbde1bcadc5f does not exist
Jan 27 09:14:16 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 904b9038-ace4-4dbe-a2a8-062b8f47d290 does not exist
Jan 27 09:14:16 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev e8563bc4-8366-4d8e-a42e-5606689347e4 does not exist
Jan 27 09:14:16 compute-0 sudo[278631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:16 compute-0 sudo[278631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:16 compute-0 sudo[278631]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:16 compute-0 sudo[278656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:14:16 compute-0 sudo[278656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:16 compute-0 sudo[278656]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:16.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:16.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:16 compute-0 ceph-mon[74357]: pgmap v1539: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 KiB/s wr, 76 op/s
Jan 27 09:14:16 compute-0 ceph-mon[74357]: osdmap e169: 3 total, 3 up, 3 in
Jan 27 09:14:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:16 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:14:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1023 B/s wr, 20 op/s
Jan 27 09:14:17 compute-0 nova_compute[247671]: 2026-01-27 09:14:17.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:14:17 compute-0 nova_compute[247671]: 2026-01-27 09:14:17.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:14:17 compute-0 nova_compute[247671]: 2026-01-27 09:14:17.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:14:17 compute-0 nova_compute[247671]: 2026-01-27 09:14:17.435 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:14:17 compute-0 nova_compute[247671]: 2026-01-27 09:14:17.435 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:14:17 compute-0 nova_compute[247671]: 2026-01-27 09:14:17.435 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 09:14:17 compute-0 nova_compute[247671]: 2026-01-27 09:14:17.457 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 09:14:17 compute-0 nova_compute[247671]: 2026-01-27 09:14:17.457 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:14:17 compute-0 nova_compute[247671]: 2026-01-27 09:14:17.458 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 09:14:18 compute-0 nova_compute[247671]: 2026-01-27 09:14:18.471 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:14:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:18.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:18.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:18 compute-0 ceph-mon[74357]: pgmap v1541: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1023 B/s wr, 20 op/s
Jan 27 09:14:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1023 B/s wr, 20 op/s
Jan 27 09:14:19 compute-0 nova_compute[247671]: 2026-01-27 09:14:19.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:14:19 compute-0 nova_compute[247671]: 2026-01-27 09:14:19.450 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:14:19 compute-0 nova_compute[247671]: 2026-01-27 09:14:19.450 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:14:19 compute-0 nova_compute[247671]: 2026-01-27 09:14:19.450 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:14:19 compute-0 nova_compute[247671]: 2026-01-27 09:14:19.450 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:14:19 compute-0 nova_compute[247671]: 2026-01-27 09:14:19.451 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:14:19 compute-0 ceph-mon[74357]: pgmap v1542: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1023 B/s wr, 20 op/s
Jan 27 09:14:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:14:19 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3238029623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:14:19 compute-0 nova_compute[247671]: 2026-01-27 09:14:19.886 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:14:20 compute-0 nova_compute[247671]: 2026-01-27 09:14:20.335 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:14:20 compute-0 nova_compute[247671]: 2026-01-27 09:14:20.337 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5067MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:14:20 compute-0 nova_compute[247671]: 2026-01-27 09:14:20.337 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:14:20 compute-0 nova_compute[247671]: 2026-01-27 09:14:20.337 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:14:20 compute-0 nova_compute[247671]: 2026-01-27 09:14:20.426 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
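The INFO line above reports a stale Placement allocation: instance 621d3dcf-38f5-4e64-af83-bbe492683b16 still holds resources against this provider but is gone from Nova's database, a typical leftover of an interrupted delete. One way to inspect it, sketched under the assumption that python-openstackclient with the osc-placement plugin is installed:

import subprocess

consumer = '621d3dcf-38f5-4e64-af83-bbe492683b16'
# Show what the orphaned consumer still holds in Placement.
subprocess.run(['openstack', 'resource', 'provider',
                'allocation', 'show', consumer], check=True)
# If confirmed stale, the same plugin can remove it:
# ['openstack', 'resource', 'provider', 'allocation', 'delete', consumer]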
Jan 27 09:14:20 compute-0 nova_compute[247671]: 2026-01-27 09:14:20.426 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:14:20 compute-0 nova_compute[247671]: 2026-01-27 09:14:20.426 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:14:20 compute-0 nova_compute[247671]: 2026-01-27 09:14:20.566 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:14:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:20.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:20.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
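The beast lines are anonymous HEAD / probes landing every two seconds from 192.168.122.100 and 192.168.122.102, the cadence of load-balancer health checks rather than user traffic. An equivalent probe is sketched below; the host and port are assumptions, since the log shows neither the bind address nor the beast frontend port:

import http.client

conn = http.client.HTTPConnection('compute-0', 8080)  # host/port assumed
conn.request('HEAD', '/')
print(conn.getresponse().status)  # the logged probes return 200
conn.close()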
Jan 27 09:14:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 307 B/s wr, 2 op/s
Jan 27 09:14:20 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3238029623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:14:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:14:20 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2753124174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:14:21 compute-0 nova_compute[247671]: 2026-01-27 09:14:21.012 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:14:21 compute-0 nova_compute[247671]: 2026-01-27 09:14:21.016 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:14:21 compute-0 nova_compute[247671]: 2026-01-27 09:14:21.033 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:14:21 compute-0 nova_compute[247671]: 2026-01-27 09:14:21.035 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:14:21 compute-0 nova_compute[247671]: 2026-01-27 09:14:21.035 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
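The update pass above finishes with the inventory unchanged and the compute_resources lock released after 0.698s. Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio, so the logged figures work out as follows:

# Worked example: effective capacity from the inventory logged above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 0,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 18.0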
Jan 27 09:14:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:21 compute-0 ceph-mon[74357]: pgmap v1543: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 307 B/s wr, 2 op/s
Jan 27 09:14:21 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2753124174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:14:21 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1223237027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:14:21 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2425593037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:14:21 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3188920394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:14:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:22.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:22.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:14:22 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2973846024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:14:23 compute-0 nova_compute[247671]: 2026-01-27 09:14:23.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:14:23 compute-0 ceph-mon[74357]: pgmap v1544: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
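The autoscaler block above is reproducible arithmetic: each pool's raw pg target is capacity_ratio * bias * (mon_target_pg_per_osd * OSD count), then quantized to a power of two and clamped by per-pool floors. The logged values are consistent with 100 target PGs per OSD (the Ceph default, assumed here) across the 3 OSDs this cluster reports:

# Worked check of three pg_autoscaler lines (ratios copied from the log).
TARGET_PGS = 100 * 3   # mon_target_pg_per_osd (assumed default) x 3 OSDs
pools = [
    ('.mgr',               2.0538165363856318e-05, 1.0),
    ('images',             0.0019031427391587568,  1.0),
    ('cephfs.cephfs.meta', 1.4540294062907128e-06, 4.0),
]
for name, ratio, bias in pools:
    print(name, ratio * bias * TARGET_PGS)
# Reproduces the logged targets: ~0.00616 -> 1, ~0.5709 -> 32, ~0.001745 -> 16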
Jan 27 09:14:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:24.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:24.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:14:25 compute-0 ceph-mon[74357]: pgmap v1545: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:14:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:26 compute-0 nova_compute[247671]: 2026-01-27 09:14:26.440 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:14:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 09:14:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:26.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 09:14:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:26.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 473 B/s rd, 379 B/s wr, 1 op/s
Jan 27 09:14:27 compute-0 nova_compute[247671]: 2026-01-27 09:14:27.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:14:27 compute-0 ceph-mon[74357]: pgmap v1546: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 473 B/s rd, 379 B/s wr, 1 op/s
Jan 27 09:14:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:28.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:28.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 09:14:29 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:14:29.056 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:14:29 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:14:29.057 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:14:30 compute-0 ceph-mon[74357]: pgmap v1547: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 27 09:14:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:30.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:30.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 4.6 KiB/s rd, 341 B/s wr, 6 op/s
Jan 27 09:14:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2839067339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:14:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2839067339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
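The paired df and osd pool get-quota dispatches from 192.168.122.10 are the capacity poll of a Cinder RBD backend: cluster free space plus any quota on the volumes pool. A sketch of the quota half, assuming the ceph CLI and the client.openstack credentials seen in the log:

import json, subprocess

out = subprocess.run(
    ['ceph', 'osd', 'pool', 'get-quota', 'volumes',
     '--format=json', '--id', 'openstack'],
    capture_output=True, text=True, check=True).stdout
quota = json.loads(out)
# 0 means unlimited for both fields.
print(quota.get('quota_max_bytes'), quota.get('quota_max_objects'))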
Jan 27 09:14:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:32 compute-0 ceph-mon[74357]: pgmap v1548: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 4.6 KiB/s rd, 341 B/s wr, 6 op/s
Jan 27 09:14:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:32.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:32.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:32 compute-0 sudo[278734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:32 compute-0 sudo[278734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:32 compute-0 sudo[278734]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 597 B/s wr, 9 op/s
Jan 27 09:14:32 compute-0 sudo[278760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:32 compute-0 sudo[278760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:32 compute-0 sudo[278760]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:32 compute-0 podman[278758]: 2026-01-27 09:14:32.922949293 +0000 UTC m=+0.082231583 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
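The podman line above is a scheduled healthcheck event: the container's configured test, /openstack/healthcheck mounted from /var/lib/openstack/healthchecks/ovn_controller, exited 0, hence health_status=healthy with a zero failing streak. The same test can be driven on demand, assuming podman is on PATH:

import subprocess

# 'podman healthcheck run' executes the container's configured test
# and returns non-zero when the check fails.
r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'])
print('healthy' if r.returncode == 0 else 'unhealthy')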
Jan 27 09:14:34 compute-0 ceph-mon[74357]: pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 597 B/s wr, 9 op/s
Jan 27 09:14:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:34.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:34.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Jan 27 09:14:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:36 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:14:36.059 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:14:36 compute-0 ceph-mon[74357]: pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Jan 27 09:14:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:36.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:36.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Jan 27 09:14:38 compute-0 ceph-mon[74357]: pgmap v1551: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Jan 27 09:14:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:38.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:38.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:14:40 compute-0 ceph-mon[74357]: pgmap v1552: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 27 09:14:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:40.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:40.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 20 op/s
Jan 27 09:14:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:42 compute-0 ceph-mon[74357]: pgmap v1553: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 20 op/s
Jan 27 09:14:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:42.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:42.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 14 op/s
Jan 27 09:14:44 compute-0 ceph-mon[74357]: pgmap v1554: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 14 op/s
Jan 27 09:14:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:44.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:44.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 54 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 569 KiB/s wr, 12 op/s
Jan 27 09:14:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:14:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:14:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:14:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:14:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:14:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:14:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:46 compute-0 ceph-mon[74357]: pgmap v1555: 305 pgs: 305 active+clean; 54 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 569 KiB/s wr, 12 op/s
Jan 27 09:14:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:46.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:46.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:14:47 compute-0 podman[278817]: 2026-01-27 09:14:47.236160899 +0000 UTC m=+0.051552092 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 27 09:14:48 compute-0 ceph-mon[74357]: pgmap v1556: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:14:48 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2562824696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 27 09:14:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:48.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:48.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:14:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 27 09:14:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 27 09:14:49 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 27 09:14:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 27 09:14:50 compute-0 ceph-mon[74357]: pgmap v1557: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:14:50 compute-0 ceph-mon[74357]: osdmap e170: 3 total, 3 up, 3 in
Jan 27 09:14:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 27 09:14:50 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 27 09:14:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:50.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:50.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.7 MiB/s wr, 57 op/s
Jan 27 09:14:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 27 09:14:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 27 09:14:51 compute-0 ceph-mon[74357]: osdmap e171: 3 total, 3 up, 3 in
Jan 27 09:14:51 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 27 09:14:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 27 09:14:52 compute-0 ceph-mon[74357]: pgmap v1560: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.7 MiB/s wr, 57 op/s
Jan 27 09:14:52 compute-0 ceph-mon[74357]: osdmap e172: 3 total, 3 up, 3 in
Jan 27 09:14:52 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 27 09:14:52 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 27 09:14:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:52.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:52.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 1.5 KiB/s wr, 11 op/s
Jan 27 09:14:52 compute-0 sudo[278839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:52 compute-0 sudo[278839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:52 compute-0 sudo[278839]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:53 compute-0 sudo[278864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:14:53 compute-0 sudo[278864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:14:53 compute-0 sudo[278864]: pam_unix(sudo:session): session closed for user root
Jan 27 09:14:53 compute-0 ceph-mon[74357]: osdmap e173: 3 total, 3 up, 3 in
Jan 27 09:14:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:14:54.255 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:14:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:14:54.255 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:14:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:14:54.255 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:14:54 compute-0 ceph-mon[74357]: pgmap v1563: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 1.5 KiB/s wr, 11 op/s
Jan 27 09:14:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:54.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:14:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:54.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:14:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 108 MiB data, 288 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.9 MiB/s wr, 12 op/s
Jan 27 09:14:55 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/4200304722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 27 09:14:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 27 09:14:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 27 09:14:56 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 27 09:14:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:56.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:56.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:56 compute-0 ceph-mon[74357]: pgmap v1564: 305 pgs: 305 active+clean; 108 MiB data, 288 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.9 MiB/s wr, 12 op/s
Jan 27 09:14:56 compute-0 ceph-mon[74357]: osdmap e174: 3 total, 3 up, 3 in
Jan 27 09:14:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 147 MiB data, 299 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.8 MiB/s wr, 98 op/s
Jan 27 09:14:58 compute-0 ceph-mon[74357]: pgmap v1566: 305 pgs: 305 active+clean; 147 MiB data, 299 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.8 MiB/s wr, 98 op/s
Jan 27 09:14:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:14:58.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:14:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:14:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:14:58.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:14:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 147 MiB data, 299 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 78 op/s
Jan 27 09:15:00 compute-0 ceph-mon[74357]: pgmap v1567: 305 pgs: 305 active+clean; 147 MiB data, 299 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 78 op/s
Jan 27 09:15:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/224503555' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:15:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/224503555' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:15:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:00.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:00.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 180 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.0 MiB/s wr, 88 op/s
Jan 27 09:15:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:01 compute-0 anacron[29966]: Job `cron.monthly' started
Jan 27 09:15:01 compute-0 anacron[29966]: Job `cron.monthly' terminated
Jan 27 09:15:01 compute-0 anacron[29966]: Normal exit (3 jobs run)
Jan 27 09:15:02 compute-0 ceph-mon[74357]: pgmap v1568: 305 pgs: 305 active+clean; 180 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.0 MiB/s wr, 88 op/s
Jan 27 09:15:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:02.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:02.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 180 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.3 MiB/s wr, 74 op/s
Jan 27 09:15:03 compute-0 podman[278896]: 2026-01-27 09:15:03.262507323 +0000 UTC m=+0.075802356 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 09:15:04 compute-0 ceph-mon[74357]: pgmap v1569: 305 pgs: 305 active+clean; 180 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.3 MiB/s wr, 74 op/s
Jan 27 09:15:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:04.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:04.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 180 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.2 MiB/s wr, 72 op/s
Jan 27 09:15:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:06 compute-0 ceph-mon[74357]: pgmap v1570: 305 pgs: 305 active+clean; 180 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.2 MiB/s wr, 72 op/s
Jan 27 09:15:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:06.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:06.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 180 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 14 op/s
Jan 27 09:15:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 27 09:15:08 compute-0 ceph-mon[74357]: pgmap v1571: 305 pgs: 305 active+clean; 180 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 14 op/s
Jan 27 09:15:08 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 27 09:15:08 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 27 09:15:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:08.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:08.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 180 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 15 op/s
Jan 27 09:15:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 27 09:15:09 compute-0 ceph-mon[74357]: osdmap e175: 3 total, 3 up, 3 in
Jan 27 09:15:09 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/4241099658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:15:09 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/4241099658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:15:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 27 09:15:09 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 27 09:15:10 compute-0 ceph-mon[74357]: pgmap v1573: 305 pgs: 305 active+clean; 180 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 15 op/s
Jan 27 09:15:10 compute-0 ceph-mon[74357]: osdmap e176: 3 total, 3 up, 3 in
Jan 27 09:15:10 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3659325863' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:15:10 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3659325863' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:15:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:10.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:10.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:10 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:15:10.837 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:15:10 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:15:10.838 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:15:10 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:15:10.838 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:15:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 54 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 93 op/s
Jan 27 09:15:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:12 compute-0 nova_compute[247671]: 2026-01-27 09:15:12.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:15:12 compute-0 ceph-mon[74357]: pgmap v1575: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 54 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 93 op/s
Jan 27 09:15:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:12.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:12.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 54 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 93 op/s
Jan 27 09:15:13 compute-0 sudo[278928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:13 compute-0 sudo[278928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:13 compute-0 sudo[278928]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:13 compute-0 sudo[278953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:13 compute-0 sudo[278953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:13 compute-0 sudo[278953]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:13 compute-0 nova_compute[247671]: 2026-01-27 09:15:13.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:15:13 compute-0 nova_compute[247671]: 2026-01-27 09:15:13.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:15:13 compute-0 nova_compute[247671]: 2026-01-27 09:15:13.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:15:14 compute-0 nova_compute[247671]: 2026-01-27 09:15:14.419 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:15:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:14.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:14 compute-0 ceph-mon[74357]: pgmap v1576: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 54 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 93 op/s
Jan 27 09:15:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:14.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 41 MiB data, 270 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 93 op/s
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:15:15
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.mgr', 'default.rgw.meta', 'images', 'backups', 'volumes', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:15:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:15:15 compute-0 nova_compute[247671]: 2026-01-27 09:15:15.417 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:15:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 27 09:15:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 27 09:15:16 compute-0 ceph-mon[74357]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 27 09:15:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:16.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:16 compute-0 ceph-mon[74357]: pgmap v1577: 305 pgs: 305 active+clean; 41 MiB data, 270 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 93 op/s
Jan 27 09:15:16 compute-0 ceph-mon[74357]: osdmap e177: 3 total, 3 up, 3 in
Jan 27 09:15:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:16.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 93 op/s
Jan 27 09:15:16 compute-0 sudo[278980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:16 compute-0 sudo[278980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:16 compute-0 sudo[278980]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:16 compute-0 sudo[279005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:15:16 compute-0 sudo[279005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:16 compute-0 sudo[279005]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:16 compute-0 sudo[279030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:16 compute-0 sudo[279030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:16 compute-0 sudo[279030]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:17 compute-0 sudo[279055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:15:17 compute-0 sudo[279055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:17 compute-0 sudo[279055]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:15:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:15:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:15:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:15:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:15:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:15:17 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 77a8307a-2b86-4732-a1aa-373cfdb68d83 does not exist
Jan 27 09:15:17 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 12376302-da76-4e9e-870b-4b4a18977539 does not exist
Jan 27 09:15:17 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 325207ce-1840-479b-8e64-8e556fa20582 does not exist
Jan 27 09:15:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:15:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:15:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:15:17 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:15:17 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:15:17 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:15:17 compute-0 sudo[279112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:17 compute-0 sudo[279112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:17 compute-0 sudo[279112]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:15:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:15:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:15:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:15:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:15:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:15:17 compute-0 sudo[279138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:15:17 compute-0 sudo[279138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:17 compute-0 sudo[279138]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:17 compute-0 podman[279136]: 2026-01-27 09:15:17.883092005 +0000 UTC m=+0.079618401 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 09:15:17 compute-0 sudo[279181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:17 compute-0 sudo[279181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:17 compute-0 sudo[279181]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:17 compute-0 sudo[279206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:15:17 compute-0 sudo[279206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:18 compute-0 podman[279271]: 2026-01-27 09:15:18.268603382 +0000 UTC m=+0.045856017 container create bd9603b985e61a26d9e3af917c899ef56539879dc89e7c30603a84de1b03281a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_raman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 27 09:15:18 compute-0 systemd[1]: Started libpod-conmon-bd9603b985e61a26d9e3af917c899ef56539879dc89e7c30603a84de1b03281a.scope.
Jan 27 09:15:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:15:18 compute-0 podman[279271]: 2026-01-27 09:15:18.247334529 +0000 UTC m=+0.024587184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:15:18 compute-0 podman[279271]: 2026-01-27 09:15:18.348685575 +0000 UTC m=+0.125938220 container init bd9603b985e61a26d9e3af917c899ef56539879dc89e7c30603a84de1b03281a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_raman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:15:18 compute-0 podman[279271]: 2026-01-27 09:15:18.35726804 +0000 UTC m=+0.134520675 container start bd9603b985e61a26d9e3af917c899ef56539879dc89e7c30603a84de1b03281a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_raman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:15:18 compute-0 podman[279271]: 2026-01-27 09:15:18.360836587 +0000 UTC m=+0.138089242 container attach bd9603b985e61a26d9e3af917c899ef56539879dc89e7c30603a84de1b03281a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 27 09:15:18 compute-0 sleepy_raman[279288]: 167 167
Jan 27 09:15:18 compute-0 systemd[1]: libpod-bd9603b985e61a26d9e3af917c899ef56539879dc89e7c30603a84de1b03281a.scope: Deactivated successfully.
Jan 27 09:15:18 compute-0 podman[279271]: 2026-01-27 09:15:18.363246903 +0000 UTC m=+0.140499538 container died bd9603b985e61a26d9e3af917c899ef56539879dc89e7c30603a84de1b03281a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:15:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0e3f5f6712e639ea073254bc85479a79dd3160ce679859020d32cdf4f52c59f-merged.mount: Deactivated successfully.
Jan 27 09:15:18 compute-0 podman[279271]: 2026-01-27 09:15:18.405345786 +0000 UTC m=+0.182598421 container remove bd9603b985e61a26d9e3af917c899ef56539879dc89e7c30603a84de1b03281a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_raman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 27 09:15:18 compute-0 systemd[1]: libpod-conmon-bd9603b985e61a26d9e3af917c899ef56539879dc89e7c30603a84de1b03281a.scope: Deactivated successfully.
Jan 27 09:15:18 compute-0 nova_compute[247671]: 2026-01-27 09:15:18.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:15:18 compute-0 nova_compute[247671]: 2026-01-27 09:15:18.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:15:18 compute-0 nova_compute[247671]: 2026-01-27 09:15:18.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:15:18 compute-0 nova_compute[247671]: 2026-01-27 09:15:18.437 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:15:18 compute-0 podman[279313]: 2026-01-27 09:15:18.583507724 +0000 UTC m=+0.046651678 container create 59fd0d29d1356314dddb681583042472cd1d93721c3bae1bfb2e5d717b35c027 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mendel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 09:15:18 compute-0 systemd[1]: Started libpod-conmon-59fd0d29d1356314dddb681583042472cd1d93721c3bae1bfb2e5d717b35c027.scope.
Jan 27 09:15:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a99c90b578ca220d785cbd642387cfb4589c7eb7d9075a00b8d2edebe55ba55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a99c90b578ca220d785cbd642387cfb4589c7eb7d9075a00b8d2edebe55ba55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a99c90b578ca220d785cbd642387cfb4589c7eb7d9075a00b8d2edebe55ba55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a99c90b578ca220d785cbd642387cfb4589c7eb7d9075a00b8d2edebe55ba55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a99c90b578ca220d785cbd642387cfb4589c7eb7d9075a00b8d2edebe55ba55/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:18 compute-0 podman[279313]: 2026-01-27 09:15:18.564124413 +0000 UTC m=+0.027268387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:15:18 compute-0 podman[279313]: 2026-01-27 09:15:18.666846116 +0000 UTC m=+0.129990100 container init 59fd0d29d1356314dddb681583042472cd1d93721c3bae1bfb2e5d717b35c027 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mendel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 09:15:18 compute-0 podman[279313]: 2026-01-27 09:15:18.673172929 +0000 UTC m=+0.136316913 container start 59fd0d29d1356314dddb681583042472cd1d93721c3bae1bfb2e5d717b35c027 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mendel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 09:15:18 compute-0 podman[279313]: 2026-01-27 09:15:18.676550612 +0000 UTC m=+0.139694596 container attach 59fd0d29d1356314dddb681583042472cd1d93721c3bae1bfb2e5d717b35c027 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 09:15:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:18.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:18.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:18 compute-0 ceph-mon[74357]: pgmap v1579: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 93 op/s
Jan 27 09:15:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 3.9 KiB/s wr, 81 op/s
Jan 27 09:15:19 compute-0 tender_mendel[279329]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:15:19 compute-0 tender_mendel[279329]: --> relative data size: 1.0
Jan 27 09:15:19 compute-0 tender_mendel[279329]: --> All data devices are unavailable
Jan 27 09:15:19 compute-0 systemd[1]: libpod-59fd0d29d1356314dddb681583042472cd1d93721c3bae1bfb2e5d717b35c027.scope: Deactivated successfully.
Jan 27 09:15:19 compute-0 podman[279313]: 2026-01-27 09:15:19.568606088 +0000 UTC m=+1.031750082 container died 59fd0d29d1356314dddb681583042472cd1d93721c3bae1bfb2e5d717b35c027 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 09:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a99c90b578ca220d785cbd642387cfb4589c7eb7d9075a00b8d2edebe55ba55-merged.mount: Deactivated successfully.
Jan 27 09:15:19 compute-0 podman[279313]: 2026-01-27 09:15:19.626544645 +0000 UTC m=+1.089688599 container remove 59fd0d29d1356314dddb681583042472cd1d93721c3bae1bfb2e5d717b35c027 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 27 09:15:19 compute-0 systemd[1]: libpod-conmon-59fd0d29d1356314dddb681583042472cd1d93721c3bae1bfb2e5d717b35c027.scope: Deactivated successfully.
Jan 27 09:15:19 compute-0 sudo[279206]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:19 compute-0 sudo[279357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:19 compute-0 sudo[279357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:19 compute-0 sudo[279357]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:19 compute-0 sudo[279382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:15:19 compute-0 sudo[279382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:19 compute-0 sudo[279382]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:19 compute-0 sudo[279407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:19 compute-0 sudo[279407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:19 compute-0 sudo[279407]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:19 compute-0 sudo[279432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:15:19 compute-0 sudo[279432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:19 compute-0 ceph-mon[74357]: pgmap v1580: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 3.9 KiB/s wr, 81 op/s
Jan 27 09:15:20 compute-0 podman[279497]: 2026-01-27 09:15:20.225613809 +0000 UTC m=+0.039291658 container create 11888c7f484e6073c818d8fb1765b14f053634c242c081a147ae68043f483348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_heyrovsky, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:15:20 compute-0 systemd[1]: Started libpod-conmon-11888c7f484e6073c818d8fb1765b14f053634c242c081a147ae68043f483348.scope.
Jan 27 09:15:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:15:20 compute-0 podman[279497]: 2026-01-27 09:15:20.303710087 +0000 UTC m=+0.117387946 container init 11888c7f484e6073c818d8fb1765b14f053634c242c081a147ae68043f483348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_heyrovsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:15:20 compute-0 podman[279497]: 2026-01-27 09:15:20.209565909 +0000 UTC m=+0.023243778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:15:20 compute-0 podman[279497]: 2026-01-27 09:15:20.311087339 +0000 UTC m=+0.124765188 container start 11888c7f484e6073c818d8fb1765b14f053634c242c081a147ae68043f483348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_heyrovsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:15:20 compute-0 determined_heyrovsky[279513]: 167 167
Jan 27 09:15:20 compute-0 systemd[1]: libpod-11888c7f484e6073c818d8fb1765b14f053634c242c081a147ae68043f483348.scope: Deactivated successfully.
Jan 27 09:15:20 compute-0 podman[279497]: 2026-01-27 09:15:20.317431422 +0000 UTC m=+0.131109301 container attach 11888c7f484e6073c818d8fb1765b14f053634c242c081a147ae68043f483348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_heyrovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:15:20 compute-0 podman[279497]: 2026-01-27 09:15:20.32095847 +0000 UTC m=+0.134636339 container died 11888c7f484e6073c818d8fb1765b14f053634c242c081a147ae68043f483348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:15:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c46ac447cbd2226e37b05166c9c35405c489426596212a7bd349d4d2da8bbf3-merged.mount: Deactivated successfully.
Jan 27 09:15:20 compute-0 podman[279497]: 2026-01-27 09:15:20.354584261 +0000 UTC m=+0.168262100 container remove 11888c7f484e6073c818d8fb1765b14f053634c242c081a147ae68043f483348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:15:20 compute-0 systemd[1]: libpod-conmon-11888c7f484e6073c818d8fb1765b14f053634c242c081a147ae68043f483348.scope: Deactivated successfully.
Jan 27 09:15:20 compute-0 nova_compute[247671]: 2026-01-27 09:15:20.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:15:20 compute-0 nova_compute[247671]: 2026-01-27 09:15:20.425 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:15:20 compute-0 nova_compute[247671]: 2026-01-27 09:15:20.448 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:15:20 compute-0 nova_compute[247671]: 2026-01-27 09:15:20.449 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:15:20 compute-0 nova_compute[247671]: 2026-01-27 09:15:20.449 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:15:20 compute-0 nova_compute[247671]: 2026-01-27 09:15:20.450 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:15:20 compute-0 nova_compute[247671]: 2026-01-27 09:15:20.450 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:15:20 compute-0 podman[279538]: 2026-01-27 09:15:20.513633865 +0000 UTC m=+0.041286521 container create 1f82ac9894f1d90002c69b94104fadaf0bc937b67bc12deb22e63e40761384f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 27 09:15:20 compute-0 systemd[1]: Started libpod-conmon-1f82ac9894f1d90002c69b94104fadaf0bc937b67bc12deb22e63e40761384f3.scope.
Jan 27 09:15:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6167b5944d5fbfd5a08efaf4a17e5998bc3fd9b833f735ac3d5a45994f3735ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6167b5944d5fbfd5a08efaf4a17e5998bc3fd9b833f735ac3d5a45994f3735ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6167b5944d5fbfd5a08efaf4a17e5998bc3fd9b833f735ac3d5a45994f3735ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6167b5944d5fbfd5a08efaf4a17e5998bc3fd9b833f735ac3d5a45994f3735ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:20 compute-0 podman[279538]: 2026-01-27 09:15:20.496981639 +0000 UTC m=+0.024634325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:15:20 compute-0 podman[279538]: 2026-01-27 09:15:20.596402862 +0000 UTC m=+0.124055548 container init 1f82ac9894f1d90002c69b94104fadaf0bc937b67bc12deb22e63e40761384f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:15:20 compute-0 podman[279538]: 2026-01-27 09:15:20.602627522 +0000 UTC m=+0.130280178 container start 1f82ac9894f1d90002c69b94104fadaf0bc937b67bc12deb22e63e40761384f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bell, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 27 09:15:20 compute-0 podman[279538]: 2026-01-27 09:15:20.605404938 +0000 UTC m=+0.133057594 container attach 1f82ac9894f1d90002c69b94104fadaf0bc937b67bc12deb22e63e40761384f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:15:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:20.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:20.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 27 09:15:20 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:15:20 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1038544155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:15:20 compute-0 nova_compute[247671]: 2026-01-27 09:15:20.945 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:15:20 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1038544155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:15:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.083 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.084 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5087MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.084 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.084 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:15:21 compute-0 fervent_bell[279556]: {
Jan 27 09:15:21 compute-0 fervent_bell[279556]:     "0": [
Jan 27 09:15:21 compute-0 fervent_bell[279556]:         {
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             "devices": [
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "/dev/loop3"
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             ],
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             "lv_name": "ceph_lv0",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             "lv_size": "7511998464",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             "name": "ceph_lv0",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             "tags": {
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "ceph.cluster_name": "ceph",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "ceph.crush_device_class": "",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "ceph.encrypted": "0",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "ceph.osd_id": "0",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "ceph.type": "block",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:                 "ceph.vdo": "0"
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             },
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             "type": "block",
Jan 27 09:15:21 compute-0 fervent_bell[279556]:             "vg_name": "ceph_vg0"
Jan 27 09:15:21 compute-0 fervent_bell[279556]:         }
Jan 27 09:15:21 compute-0 fervent_bell[279556]:     ]
Jan 27 09:15:21 compute-0 fervent_bell[279556]: }
Jan 27 09:15:21 compute-0 systemd[1]: libpod-1f82ac9894f1d90002c69b94104fadaf0bc937b67bc12deb22e63e40761384f3.scope: Deactivated successfully.
Jan 27 09:15:21 compute-0 podman[279538]: 2026-01-27 09:15:21.394646119 +0000 UTC m=+0.922298775 container died 1f82ac9894f1d90002c69b94104fadaf0bc937b67bc12deb22e63e40761384f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:15:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6167b5944d5fbfd5a08efaf4a17e5998bc3fd9b833f735ac3d5a45994f3735ef-merged.mount: Deactivated successfully.
Jan 27 09:15:21 compute-0 podman[279538]: 2026-01-27 09:15:21.452132203 +0000 UTC m=+0.979784869 container remove 1f82ac9894f1d90002c69b94104fadaf0bc937b67bc12deb22e63e40761384f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bell, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 09:15:21 compute-0 systemd[1]: libpod-conmon-1f82ac9894f1d90002c69b94104fadaf0bc937b67bc12deb22e63e40761384f3.scope: Deactivated successfully.
Jan 27 09:15:21 compute-0 sudo[279432]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:21 compute-0 sudo[279595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:21 compute-0 sudo[279595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:21 compute-0 sudo[279595]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:21 compute-0 sudo[279620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:15:21 compute-0 sudo[279620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:21 compute-0 sudo[279620]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.638 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.639 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.639 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:15:21 compute-0 sudo[279645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:21 compute-0 sudo[279645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:21 compute-0 sudo[279645]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:21 compute-0 sudo[279670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:15:21 compute-0 sudo[279670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.743 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing inventories for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.886 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Updating ProviderTree inventory for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.886 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Updating inventory in ProviderTree for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.904 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing aggregate associations for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.932 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing trait associations for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 09:15:21 compute-0 nova_compute[247671]: 2026-01-27 09:15:21.986 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:15:22 compute-0 ceph-mon[74357]: pgmap v1581: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 27 09:15:22 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3654213924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:15:22 compute-0 podman[279735]: 2026-01-27 09:15:22.04179411 +0000 UTC m=+0.040134961 container create f2e37c090e0fde9c263a1d7ed3853dfa336d1f6ee7521c70f29ff8bf9ea6656d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:15:22 compute-0 systemd[1]: Started libpod-conmon-f2e37c090e0fde9c263a1d7ed3853dfa336d1f6ee7521c70f29ff8bf9ea6656d.scope.
Jan 27 09:15:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:15:22 compute-0 podman[279735]: 2026-01-27 09:15:22.109433422 +0000 UTC m=+0.107774273 container init f2e37c090e0fde9c263a1d7ed3853dfa336d1f6ee7521c70f29ff8bf9ea6656d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:15:22 compute-0 podman[279735]: 2026-01-27 09:15:22.117258776 +0000 UTC m=+0.115599627 container start f2e37c090e0fde9c263a1d7ed3853dfa336d1f6ee7521c70f29ff8bf9ea6656d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 09:15:22 compute-0 podman[279735]: 2026-01-27 09:15:22.120153236 +0000 UTC m=+0.118494087 container attach f2e37c090e0fde9c263a1d7ed3853dfa336d1f6ee7521c70f29ff8bf9ea6656d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:15:22 compute-0 podman[279735]: 2026-01-27 09:15:22.024964319 +0000 UTC m=+0.023305190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:15:22 compute-0 tender_chaum[279751]: 167 167
Jan 27 09:15:22 compute-0 systemd[1]: libpod-f2e37c090e0fde9c263a1d7ed3853dfa336d1f6ee7521c70f29ff8bf9ea6656d.scope: Deactivated successfully.
Jan 27 09:15:22 compute-0 podman[279735]: 2026-01-27 09:15:22.124219446 +0000 UTC m=+0.122560297 container died f2e37c090e0fde9c263a1d7ed3853dfa336d1f6ee7521c70f29ff8bf9ea6656d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:15:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9caddbbfa0a3913b1cdfaf7604c0ded61f49850fc5d53b5ac3148ffd593907e2-merged.mount: Deactivated successfully.
Jan 27 09:15:22 compute-0 podman[279735]: 2026-01-27 09:15:22.159728939 +0000 UTC m=+0.158069790 container remove f2e37c090e0fde9c263a1d7ed3853dfa336d1f6ee7521c70f29ff8bf9ea6656d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chaum, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:15:22 compute-0 systemd[1]: libpod-conmon-f2e37c090e0fde9c263a1d7ed3853dfa336d1f6ee7521c70f29ff8bf9ea6656d.scope: Deactivated successfully.
Jan 27 09:15:22 compute-0 podman[279795]: 2026-01-27 09:15:22.31643417 +0000 UTC m=+0.042255518 container create 002c03f13e63ca13adf3bd9cf1139e6c0c27c8b26be0f2c0c0271ec955b50f6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 09:15:22 compute-0 systemd[1]: Started libpod-conmon-002c03f13e63ca13adf3bd9cf1139e6c0c27c8b26be0f2c0c0271ec955b50f6b.scope.
Jan 27 09:15:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a6e6a321ee36f222c31cb9c5c6574e25503733a3fc3d593b7ac80d85d342a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a6e6a321ee36f222c31cb9c5c6574e25503733a3fc3d593b7ac80d85d342a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a6e6a321ee36f222c31cb9c5c6574e25503733a3fc3d593b7ac80d85d342a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a6e6a321ee36f222c31cb9c5c6574e25503733a3fc3d593b7ac80d85d342a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:22 compute-0 podman[279795]: 2026-01-27 09:15:22.376394602 +0000 UTC m=+0.102215960 container init 002c03f13e63ca13adf3bd9cf1139e6c0c27c8b26be0f2c0c0271ec955b50f6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 09:15:22 compute-0 podman[279795]: 2026-01-27 09:15:22.389498691 +0000 UTC m=+0.115320029 container start 002c03f13e63ca13adf3bd9cf1139e6c0c27c8b26be0f2c0c0271ec955b50f6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Jan 27 09:15:22 compute-0 podman[279795]: 2026-01-27 09:15:22.392469502 +0000 UTC m=+0.118290850 container attach 002c03f13e63ca13adf3bd9cf1139e6c0c27c8b26be0f2c0c0271ec955b50f6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:15:22 compute-0 podman[279795]: 2026-01-27 09:15:22.298197 +0000 UTC m=+0.024018358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:15:22 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:15:22 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1760457491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:15:22 compute-0 nova_compute[247671]: 2026-01-27 09:15:22.508 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:15:22 compute-0 nova_compute[247671]: 2026-01-27 09:15:22.516 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:15:22 compute-0 nova_compute[247671]: 2026-01-27 09:15:22.535 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:15:22 compute-0 nova_compute[247671]: 2026-01-27 09:15:22.537 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:15:22 compute-0 nova_compute[247671]: 2026-01-27 09:15:22.537 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:15:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:22.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:22.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 27 09:15:23 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2734576220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:15:23 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1760457491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:15:23 compute-0 nervous_ishizaka[279811]: {
Jan 27 09:15:23 compute-0 nervous_ishizaka[279811]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:15:23 compute-0 nervous_ishizaka[279811]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:15:23 compute-0 nervous_ishizaka[279811]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:15:23 compute-0 nervous_ishizaka[279811]:         "osd_id": 0,
Jan 27 09:15:23 compute-0 nervous_ishizaka[279811]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:15:23 compute-0 nervous_ishizaka[279811]:         "type": "bluestore"
Jan 27 09:15:23 compute-0 nervous_ishizaka[279811]:     }
Jan 27 09:15:23 compute-0 nervous_ishizaka[279811]: }
Jan 27 09:15:23 compute-0 systemd[1]: libpod-002c03f13e63ca13adf3bd9cf1139e6c0c27c8b26be0f2c0c0271ec955b50f6b.scope: Deactivated successfully.
Jan 27 09:15:23 compute-0 podman[279795]: 2026-01-27 09:15:23.256311575 +0000 UTC m=+0.982132923 container died 002c03f13e63ca13adf3bd9cf1139e6c0c27c8b26be0f2c0c0271ec955b50f6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0a6e6a321ee36f222c31cb9c5c6574e25503733a3fc3d593b7ac80d85d342a3-merged.mount: Deactivated successfully.
Jan 27 09:15:23 compute-0 podman[279795]: 2026-01-27 09:15:23.301809611 +0000 UTC m=+1.027630939 container remove 002c03f13e63ca13adf3bd9cf1139e6c0c27c8b26be0f2c0c0271ec955b50f6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:15:23 compute-0 systemd[1]: libpod-conmon-002c03f13e63ca13adf3bd9cf1139e6c0c27c8b26be0f2c0c0271ec955b50f6b.scope: Deactivated successfully.
Jan 27 09:15:23 compute-0 sudo[279670]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:15:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:15:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:15:23 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:15:23 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev cf44b5f9-d996-436a-b362-aec08aab8957 does not exist
Jan 27 09:15:23 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 33bd53d8-3a50-453f-9358-3ff31a7b4fa4 does not exist
Jan 27 09:15:23 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 3c2233bc-e292-4d2f-8589-803912ced563 does not exist
Jan 27 09:15:23 compute-0 sudo[279850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:23 compute-0 sudo[279850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:23 compute-0 sudo[279850]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:23 compute-0 sudo[279875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:15:23 compute-0 sudo[279875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:23 compute-0 sudo[279875]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:24 compute-0 ceph-mon[74357]: pgmap v1582: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 27 09:15:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:15:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:15:24 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2132039992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:15:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:24.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:24.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:25 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1310949158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:15:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:26 compute-0 ceph-mon[74357]: pgmap v1583: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:26.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:26.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:28 compute-0 ceph-mon[74357]: pgmap v1584: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:28 compute-0 nova_compute[247671]: 2026-01-27 09:15:28.536 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:15:28 compute-0 nova_compute[247671]: 2026-01-27 09:15:28.536 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:15:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:28.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:28.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:30 compute-0 ceph-mon[74357]: pgmap v1585: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:30.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:30.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:32 compute-0 ceph-mon[74357]: pgmap v1586: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:32.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:32.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:33 compute-0 sudo[279905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:33 compute-0 sudo[279905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:33 compute-0 sudo[279905]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:33 compute-0 sudo[279931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:33 compute-0 sudo[279931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:33 compute-0 sudo[279931]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:33 compute-0 podman[279929]: 2026-01-27 09:15:33.410173948 +0000 UTC m=+0.081591465 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:15:34 compute-0 ceph-mon[74357]: pgmap v1587: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:34.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:34.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:36 compute-0 ceph-mon[74357]: pgmap v1588: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 09:15:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:36.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 09:15:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:36.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:38 compute-0 ceph-mon[74357]: pgmap v1589: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:38.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:38.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:40 compute-0 ceph-mon[74357]: pgmap v1590: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:40.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:40.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:42.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:42 compute-0 ceph-mon[74357]: pgmap v1591: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:42.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:43 compute-0 ceph-mon[74357]: pgmap v1592: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:44.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:44.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:15:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:15:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:15:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:15:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:15:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:15:45 compute-0 ceph-mon[74357]: pgmap v1593: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:46.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:46.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:47 compute-0 ceph-mon[74357]: pgmap v1594: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:48 compute-0 podman[279990]: 2026-01-27 09:15:48.228399935 +0000 UTC m=+0.047105721 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:15:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:48.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:48.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:49 compute-0 ceph-mon[74357]: pgmap v1595: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:50.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:50.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:52 compute-0 ceph-mon[74357]: pgmap v1596: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:52.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:52.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:53 compute-0 sudo[280012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:53 compute-0 sudo[280012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:53 compute-0 sudo[280012]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:53 compute-0 sudo[280037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:15:53 compute-0 sudo[280037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:15:53 compute-0 sudo[280037]: pam_unix(sudo:session): session closed for user root
Jan 27 09:15:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:15:54.255 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:15:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:15:54.256 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:15:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:15:54.256 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:15:54 compute-0 ceph-mon[74357]: pgmap v1597: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:54.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:54.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:56 compute-0 ceph-mon[74357]: pgmap v1598: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:56.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:56.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:58 compute-0 ceph-mon[74357]: pgmap v1599: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:15:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:15:58.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:15:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:15:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:15:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:15:58.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:15:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:15:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/892596896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:15:59 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/892596896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:16:00 compute-0 ceph-mon[74357]: pgmap v1600: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:00.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:00.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:02 compute-0 ceph-mon[74357]: pgmap v1601: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 27 09:16:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:02.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 27 09:16:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:02.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:04 compute-0 podman[280067]: 2026-01-27 09:16:04.254686067 +0000 UTC m=+0.074046289 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:16:04 compute-0 ceph-mon[74357]: pgmap v1602: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:04.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:04.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:06 compute-0 ceph-mon[74357]: pgmap v1603: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:06.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:06.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:07 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:16:07.432 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:16:07 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:16:07.433 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:16:07 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:16:07.433 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:16:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:08 compute-0 ceph-mon[74357]: pgmap v1604: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:08.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:09 compute-0 ceph-mon[74357]: pgmap v1605: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:10.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:10.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:12 compute-0 ceph-mon[74357]: pgmap v1606: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:12.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:12.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:13 compute-0 nova_compute[247671]: 2026-01-27 09:16:13.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:13 compute-0 nova_compute[247671]: 2026-01-27 09:16:13.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:13 compute-0 sudo[280098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:13 compute-0 sudo[280098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:13 compute-0 sudo[280098]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:13 compute-0 sudo[280123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:13 compute-0 sudo[280123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:13 compute-0 sudo[280123]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:14 compute-0 ceph-mon[74357]: pgmap v1607: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:14 compute-0 nova_compute[247671]: 2026-01-27 09:16:14.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:14 compute-0 nova_compute[247671]: 2026-01-27 09:16:14.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:16:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:14.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:14.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:16:15
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:16:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:16:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:16 compute-0 ceph-mon[74357]: pgmap v1608: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:16 compute-0 nova_compute[247671]: 2026-01-27 09:16:16.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:16.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:16.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:18 compute-0 ceph-mon[74357]: pgmap v1609: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:18 compute-0 nova_compute[247671]: 2026-01-27 09:16:18.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:18 compute-0 nova_compute[247671]: 2026-01-27 09:16:18.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:16:18 compute-0 nova_compute[247671]: 2026-01-27 09:16:18.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:16:18 compute-0 nova_compute[247671]: 2026-01-27 09:16:18.445 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:16:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:18.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:18.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:19 compute-0 podman[280151]: 2026-01-27 09:16:19.246702401 +0000 UTC m=+0.057667221 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 27 09:16:20 compute-0 ceph-mon[74357]: pgmap v1610: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:20 compute-0 nova_compute[247671]: 2026-01-27 09:16:20.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:20.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:22 compute-0 nova_compute[247671]: 2026-01-27 09:16:22.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:22 compute-0 nova_compute[247671]: 2026-01-27 09:16:22.450 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:16:22 compute-0 nova_compute[247671]: 2026-01-27 09:16:22.451 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:16:22 compute-0 nova_compute[247671]: 2026-01-27 09:16:22.451 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:16:22 compute-0 nova_compute[247671]: 2026-01-27 09:16:22.451 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:16:22 compute-0 nova_compute[247671]: 2026-01-27 09:16:22.452 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:16:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:22.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:22.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:16:23 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368109675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:16:23 compute-0 ceph-mon[74357]: pgmap v1611: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:23 compute-0 nova_compute[247671]: 2026-01-27 09:16:23.682 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:16:23 compute-0 sudo[280195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:23 compute-0 sudo[280195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:23 compute-0 sudo[280195]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:23 compute-0 nova_compute[247671]: 2026-01-27 09:16:23.859 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:16:23 compute-0 nova_compute[247671]: 2026-01-27 09:16:23.860 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:16:23 compute-0 nova_compute[247671]: 2026-01-27 09:16:23.861 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:16:23 compute-0 nova_compute[247671]: 2026-01-27 09:16:23.861 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:16:23 compute-0 sudo[280220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:16:23 compute-0 sudo[280220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:23 compute-0 sudo[280220]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:23 compute-0 sudo[280245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:23 compute-0 sudo[280245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:23 compute-0 sudo[280245]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:24 compute-0 sudo[280270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:16:24 compute-0 sudo[280270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:24 compute-0 nova_compute[247671]: 2026-01-27 09:16:24.147 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:16:24 compute-0 nova_compute[247671]: 2026-01-27 09:16:24.147 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:16:24 compute-0 nova_compute[247671]: 2026-01-27 09:16:24.148 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:16:24 compute-0 nova_compute[247671]: 2026-01-27 09:16:24.189 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:16:24 compute-0 sudo[280270]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:16:24 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:16:24 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:16:24 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev fb52ca85-52af-4ec2-a5dc-650677ede7a8 does not exist
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 7674d9bf-bbdc-4e6c-9efc-bdfaf46d15c2 does not exist
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 0c768887-74c8-4172-950b-eefa73dfc0ec does not exist
Jan 27 09:16:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:16:24 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:16:24 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:16:24 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:16:24 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1000410998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:16:24 compute-0 nova_compute[247671]: 2026-01-27 09:16:24.653 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:16:24 compute-0 nova_compute[247671]: 2026-01-27 09:16:24.659 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:16:24 compute-0 sudo[280346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:24 compute-0 sudo[280346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:24 compute-0 sudo[280346]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:24 compute-0 ceph-mon[74357]: pgmap v1612: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:24 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/338107409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1368109675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3069265024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:16:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:16:24 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1000410998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:16:24 compute-0 sudo[280373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:16:24 compute-0 sudo[280373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:24 compute-0 sudo[280373]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:24 compute-0 sudo[280398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:24 compute-0 sudo[280398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:24 compute-0 sudo[280398]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:24.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:24 compute-0 sudo[280423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:16:24 compute-0 sudo[280423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:24 compute-0 nova_compute[247671]: 2026-01-27 09:16:24.902 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:16:24 compute-0 nova_compute[247671]: 2026-01-27 09:16:24.904 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:16:24 compute-0 nova_compute[247671]: 2026-01-27 09:16:24.904 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:16:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:24.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:25 compute-0 podman[280489]: 2026-01-27 09:16:25.119863681 +0000 UTC m=+0.020976506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:16:25 compute-0 podman[280489]: 2026-01-27 09:16:25.301924636 +0000 UTC m=+0.203037441 container create 916b1f0df54d3fa7588ac123e05a67e7546253135eec1128abeb7275deae8a06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:16:25 compute-0 systemd[1]: Started libpod-conmon-916b1f0df54d3fa7588ac123e05a67e7546253135eec1128abeb7275deae8a06.scope.
Jan 27 09:16:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:16:25 compute-0 podman[280489]: 2026-01-27 09:16:25.399700502 +0000 UTC m=+0.300813327 container init 916b1f0df54d3fa7588ac123e05a67e7546253135eec1128abeb7275deae8a06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 27 09:16:25 compute-0 podman[280489]: 2026-01-27 09:16:25.409116701 +0000 UTC m=+0.310229506 container start 916b1f0df54d3fa7588ac123e05a67e7546253135eec1128abeb7275deae8a06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 27 09:16:25 compute-0 podman[280489]: 2026-01-27 09:16:25.412061462 +0000 UTC m=+0.313174297 container attach 916b1f0df54d3fa7588ac123e05a67e7546253135eec1128abeb7275deae8a06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:16:25 compute-0 condescending_mayer[280506]: 167 167
Jan 27 09:16:25 compute-0 systemd[1]: libpod-916b1f0df54d3fa7588ac123e05a67e7546253135eec1128abeb7275deae8a06.scope: Deactivated successfully.
Jan 27 09:16:25 compute-0 podman[280489]: 2026-01-27 09:16:25.417341916 +0000 UTC m=+0.318454721 container died 916b1f0df54d3fa7588ac123e05a67e7546253135eec1128abeb7275deae8a06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 09:16:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-606f21ea39befa5c3a4d59e2374a878d67d84f3e3b0c20aa8b318ab05a00f350-merged.mount: Deactivated successfully.
Jan 27 09:16:25 compute-0 podman[280489]: 2026-01-27 09:16:25.457487085 +0000 UTC m=+0.358599890 container remove 916b1f0df54d3fa7588ac123e05a67e7546253135eec1128abeb7275deae8a06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 09:16:25 compute-0 systemd[1]: libpod-conmon-916b1f0df54d3fa7588ac123e05a67e7546253135eec1128abeb7275deae8a06.scope: Deactivated successfully.
Jan 27 09:16:25 compute-0 podman[280530]: 2026-01-27 09:16:25.632734904 +0000 UTC m=+0.048523340 container create 86539b0770816907997a333a6579f8812c991ec274c3aebccd118a2de67d7bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_volhard, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:16:25 compute-0 systemd[1]: Started libpod-conmon-86539b0770816907997a333a6579f8812c991ec274c3aebccd118a2de67d7bd1.scope.
Jan 27 09:16:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa0d647ff494f54051eff50cec296c9758dc1d2851effcab5bfbf522692eb8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa0d647ff494f54051eff50cec296c9758dc1d2851effcab5bfbf522692eb8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa0d647ff494f54051eff50cec296c9758dc1d2851effcab5bfbf522692eb8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa0d647ff494f54051eff50cec296c9758dc1d2851effcab5bfbf522692eb8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa0d647ff494f54051eff50cec296c9758dc1d2851effcab5bfbf522692eb8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:25 compute-0 podman[280530]: 2026-01-27 09:16:25.691290388 +0000 UTC m=+0.107078844 container init 86539b0770816907997a333a6579f8812c991ec274c3aebccd118a2de67d7bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_volhard, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:16:25 compute-0 podman[280530]: 2026-01-27 09:16:25.700368116 +0000 UTC m=+0.116156562 container start 86539b0770816907997a333a6579f8812c991ec274c3aebccd118a2de67d7bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_volhard, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 27 09:16:25 compute-0 podman[280530]: 2026-01-27 09:16:25.704021556 +0000 UTC m=+0.119810002 container attach 86539b0770816907997a333a6579f8812c991ec274c3aebccd118a2de67d7bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 27 09:16:25 compute-0 podman[280530]: 2026-01-27 09:16:25.613617001 +0000 UTC m=+0.029405477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:16:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:26 compute-0 eager_volhard[280546]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:16:26 compute-0 eager_volhard[280546]: --> relative data size: 1.0
Jan 27 09:16:26 compute-0 eager_volhard[280546]: --> All data devices are unavailable
Jan 27 09:16:26 compute-0 systemd[1]: libpod-86539b0770816907997a333a6579f8812c991ec274c3aebccd118a2de67d7bd1.scope: Deactivated successfully.
Jan 27 09:16:26 compute-0 podman[280530]: 2026-01-27 09:16:26.486346147 +0000 UTC m=+0.902134593 container died 86539b0770816907997a333a6579f8812c991ec274c3aebccd118a2de67d7bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:16:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:26.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:16:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:26.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:16:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:28.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:28.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-daa0d647ff494f54051eff50cec296c9758dc1d2851effcab5bfbf522692eb8f-merged.mount: Deactivated successfully.
Jan 27 09:16:29 compute-0 ceph-mon[74357]: pgmap v1613: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2927829285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:16:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1892450525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:16:29 compute-0 podman[280530]: 2026-01-27 09:16:29.734945281 +0000 UTC m=+4.150733727 container remove 86539b0770816907997a333a6579f8812c991ec274c3aebccd118a2de67d7bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 09:16:29 compute-0 systemd[1]: libpod-conmon-86539b0770816907997a333a6579f8812c991ec274c3aebccd118a2de67d7bd1.scope: Deactivated successfully.
Jan 27 09:16:29 compute-0 sudo[280423]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:29 compute-0 sudo[280577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:29 compute-0 sudo[280577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:29 compute-0 sudo[280577]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:29 compute-0 sudo[280602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:16:29 compute-0 sudo[280602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:29 compute-0 sudo[280602]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:29 compute-0 sudo[280627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:29 compute-0 sudo[280627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:29 compute-0 sudo[280627]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:29 compute-0 sudo[280652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:16:30 compute-0 sudo[280652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:30 compute-0 podman[280716]: 2026-01-27 09:16:30.284567831 +0000 UTC m=+0.035993137 container create 10da393f6d49beeee4f927fbf107294521f0a7934a205a5b5adfb2f33efdace1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:16:30 compute-0 systemd[1]: Started libpod-conmon-10da393f6d49beeee4f927fbf107294521f0a7934a205a5b5adfb2f33efdace1.scope.
Jan 27 09:16:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:16:30 compute-0 podman[280716]: 2026-01-27 09:16:30.269788137 +0000 UTC m=+0.021213463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:16:30 compute-0 podman[280716]: 2026-01-27 09:16:30.368252503 +0000 UTC m=+0.119677839 container init 10da393f6d49beeee4f927fbf107294521f0a7934a205a5b5adfb2f33efdace1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 09:16:30 compute-0 podman[280716]: 2026-01-27 09:16:30.374979756 +0000 UTC m=+0.126405062 container start 10da393f6d49beeee4f927fbf107294521f0a7934a205a5b5adfb2f33efdace1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:16:30 compute-0 nifty_bohr[280733]: 167 167
Jan 27 09:16:30 compute-0 systemd[1]: libpod-10da393f6d49beeee4f927fbf107294521f0a7934a205a5b5adfb2f33efdace1.scope: Deactivated successfully.
Jan 27 09:16:30 compute-0 podman[280716]: 2026-01-27 09:16:30.381502326 +0000 UTC m=+0.132927632 container attach 10da393f6d49beeee4f927fbf107294521f0a7934a205a5b5adfb2f33efdace1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:16:30 compute-0 podman[280716]: 2026-01-27 09:16:30.382520193 +0000 UTC m=+0.133945509 container died 10da393f6d49beeee4f927fbf107294521f0a7934a205a5b5adfb2f33efdace1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 09:16:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0a7fe0a1415e9055d6bb41f2613c9a7d6276c754aa8e67ca2e880cb73b137ad-merged.mount: Deactivated successfully.
Jan 27 09:16:30 compute-0 podman[280716]: 2026-01-27 09:16:30.419766403 +0000 UTC m=+0.171191709 container remove 10da393f6d49beeee4f927fbf107294521f0a7934a205a5b5adfb2f33efdace1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 09:16:30 compute-0 systemd[1]: libpod-conmon-10da393f6d49beeee4f927fbf107294521f0a7934a205a5b5adfb2f33efdace1.scope: Deactivated successfully.
Jan 27 09:16:30 compute-0 podman[280759]: 2026-01-27 09:16:30.594046646 +0000 UTC m=+0.041882548 container create 23d2634b05397830c39101642240d097e6eef0be9e3c764c1faed8d7bff300d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:16:30 compute-0 systemd[1]: Started libpod-conmon-23d2634b05397830c39101642240d097e6eef0be9e3c764c1faed8d7bff300d3.scope.
Jan 27 09:16:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3f02f3b1da9d54af64d071bcf678585946151be61569ebf8323c3562114930/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3f02f3b1da9d54af64d071bcf678585946151be61569ebf8323c3562114930/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3f02f3b1da9d54af64d071bcf678585946151be61569ebf8323c3562114930/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3f02f3b1da9d54af64d071bcf678585946151be61569ebf8323c3562114930/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:30 compute-0 podman[280759]: 2026-01-27 09:16:30.575497657 +0000 UTC m=+0.023333589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:16:30 compute-0 podman[280759]: 2026-01-27 09:16:30.680628897 +0000 UTC m=+0.128464819 container init 23d2634b05397830c39101642240d097e6eef0be9e3c764c1faed8d7bff300d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shockley, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:16:30 compute-0 podman[280759]: 2026-01-27 09:16:30.687498724 +0000 UTC m=+0.135334636 container start 23d2634b05397830c39101642240d097e6eef0be9e3c764c1faed8d7bff300d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shockley, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:16:30 compute-0 podman[280759]: 2026-01-27 09:16:30.690884117 +0000 UTC m=+0.138720029 container attach 23d2634b05397830c39101642240d097e6eef0be9e3c764c1faed8d7bff300d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:16:30 compute-0 ceph-mon[74357]: pgmap v1614: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:30 compute-0 ceph-mon[74357]: pgmap v1615: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:30.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:30 compute-0 nova_compute[247671]: 2026-01-27 09:16:30.904 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:30 compute-0 nova_compute[247671]: 2026-01-27 09:16:30.905 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:30.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:31 compute-0 strange_shockley[280775]: {
Jan 27 09:16:31 compute-0 strange_shockley[280775]:     "0": [
Jan 27 09:16:31 compute-0 strange_shockley[280775]:         {
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             "devices": [
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "/dev/loop3"
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             ],
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             "lv_name": "ceph_lv0",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             "lv_size": "7511998464",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             "name": "ceph_lv0",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             "tags": {
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "ceph.cluster_name": "ceph",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "ceph.crush_device_class": "",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "ceph.encrypted": "0",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "ceph.osd_id": "0",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "ceph.type": "block",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:                 "ceph.vdo": "0"
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             },
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             "type": "block",
Jan 27 09:16:31 compute-0 strange_shockley[280775]:             "vg_name": "ceph_vg0"
Jan 27 09:16:31 compute-0 strange_shockley[280775]:         }
Jan 27 09:16:31 compute-0 strange_shockley[280775]:     ]
Jan 27 09:16:31 compute-0 strange_shockley[280775]: }
Jan 27 09:16:31 compute-0 systemd[1]: libpod-23d2634b05397830c39101642240d097e6eef0be9e3c764c1faed8d7bff300d3.scope: Deactivated successfully.
Jan 27 09:16:31 compute-0 podman[280784]: 2026-01-27 09:16:31.507128198 +0000 UTC m=+0.022174818 container died 23d2634b05397830c39101642240d097e6eef0be9e3c764c1faed8d7bff300d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shockley, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 09:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb3f02f3b1da9d54af64d071bcf678585946151be61569ebf8323c3562114930-merged.mount: Deactivated successfully.
Jan 27 09:16:31 compute-0 podman[280784]: 2026-01-27 09:16:31.551653016 +0000 UTC m=+0.066699616 container remove 23d2634b05397830c39101642240d097e6eef0be9e3c764c1faed8d7bff300d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shockley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:16:31 compute-0 systemd[1]: libpod-conmon-23d2634b05397830c39101642240d097e6eef0be9e3c764c1faed8d7bff300d3.scope: Deactivated successfully.
Jan 27 09:16:31 compute-0 sudo[280652]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:31 compute-0 sudo[280799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:31 compute-0 sudo[280799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:31 compute-0 sudo[280799]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:31 compute-0 sudo[280824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:16:31 compute-0 sudo[280824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:31 compute-0 sudo[280824]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:31 compute-0 sudo[280849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:31 compute-0 sudo[280849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:31 compute-0 sudo[280849]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:31 compute-0 sudo[280874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:16:31 compute-0 sudo[280874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:32 compute-0 podman[280941]: 2026-01-27 09:16:32.098867921 +0000 UTC m=+0.037747055 container create 8a878baaf6c52793ccc06359c955738111cd115a392dad5331d36f993b40b4ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:16:32 compute-0 systemd[1]: Started libpod-conmon-8a878baaf6c52793ccc06359c955738111cd115a392dad5331d36f993b40b4ce.scope.
Jan 27 09:16:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:16:32 compute-0 podman[280941]: 2026-01-27 09:16:32.165958958 +0000 UTC m=+0.104838112 container init 8a878baaf6c52793ccc06359c955738111cd115a392dad5331d36f993b40b4ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_moser, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:16:32 compute-0 podman[280941]: 2026-01-27 09:16:32.173786703 +0000 UTC m=+0.112665837 container start 8a878baaf6c52793ccc06359c955738111cd115a392dad5331d36f993b40b4ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:16:32 compute-0 interesting_moser[280957]: 167 167
Jan 27 09:16:32 compute-0 podman[280941]: 2026-01-27 09:16:32.176927448 +0000 UTC m=+0.115806602 container attach 8a878baaf6c52793ccc06359c955738111cd115a392dad5331d36f993b40b4ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_moser, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:16:32 compute-0 podman[280941]: 2026-01-27 09:16:32.082009109 +0000 UTC m=+0.020888263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:16:32 compute-0 systemd[1]: libpod-8a878baaf6c52793ccc06359c955738111cd115a392dad5331d36f993b40b4ce.scope: Deactivated successfully.
Jan 27 09:16:32 compute-0 podman[280941]: 2026-01-27 09:16:32.177860724 +0000 UTC m=+0.116739878 container died 8a878baaf6c52793ccc06359c955738111cd115a392dad5331d36f993b40b4ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 09:16:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3baa5512fc294a7e27f80c312ed42ed31c251de7901d71a6c5bba65bd1ed1ff9-merged.mount: Deactivated successfully.
Jan 27 09:16:32 compute-0 podman[280941]: 2026-01-27 09:16:32.220207763 +0000 UTC m=+0.159086907 container remove 8a878baaf6c52793ccc06359c955738111cd115a392dad5331d36f993b40b4ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_moser, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 09:16:32 compute-0 systemd[1]: libpod-conmon-8a878baaf6c52793ccc06359c955738111cd115a392dad5331d36f993b40b4ce.scope: Deactivated successfully.
Jan 27 09:16:32 compute-0 podman[280980]: 2026-01-27 09:16:32.400293634 +0000 UTC m=+0.047425720 container create 81d7247bc310b682e6e7396aca5dcb3b8c6bb925dc29aa68d98484c1584d7f48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 09:16:32 compute-0 systemd[1]: Started libpod-conmon-81d7247bc310b682e6e7396aca5dcb3b8c6bb925dc29aa68d98484c1584d7f48.scope.
Jan 27 09:16:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0b498ac630f656c6acd9a0a66472c4c7214a92b9365a6e057971de7888c497/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0b498ac630f656c6acd9a0a66472c4c7214a92b9365a6e057971de7888c497/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0b498ac630f656c6acd9a0a66472c4c7214a92b9365a6e057971de7888c497/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0b498ac630f656c6acd9a0a66472c4c7214a92b9365a6e057971de7888c497/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:32 compute-0 podman[280980]: 2026-01-27 09:16:32.38480947 +0000 UTC m=+0.031941586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:16:32 compute-0 podman[280980]: 2026-01-27 09:16:32.486289219 +0000 UTC m=+0.133421305 container init 81d7247bc310b682e6e7396aca5dcb3b8c6bb925dc29aa68d98484c1584d7f48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_boyd, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 27 09:16:32 compute-0 podman[280980]: 2026-01-27 09:16:32.49180133 +0000 UTC m=+0.138933416 container start 81d7247bc310b682e6e7396aca5dcb3b8c6bb925dc29aa68d98484c1584d7f48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_boyd, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:16:32 compute-0 podman[280980]: 2026-01-27 09:16:32.494645688 +0000 UTC m=+0.141777764 container attach 81d7247bc310b682e6e7396aca5dcb3b8c6bb925dc29aa68d98484c1584d7f48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:16:32 compute-0 ceph-mon[74357]: pgmap v1616: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:32.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:32.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:33 compute-0 heuristic_boyd[280996]: {
Jan 27 09:16:33 compute-0 heuristic_boyd[280996]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:16:33 compute-0 heuristic_boyd[280996]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:16:33 compute-0 heuristic_boyd[280996]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:16:33 compute-0 heuristic_boyd[280996]:         "osd_id": 0,
Jan 27 09:16:33 compute-0 heuristic_boyd[280996]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:16:33 compute-0 heuristic_boyd[280996]:         "type": "bluestore"
Jan 27 09:16:33 compute-0 heuristic_boyd[280996]:     }
Jan 27 09:16:33 compute-0 heuristic_boyd[280996]: }
Jan 27 09:16:33 compute-0 systemd[1]: libpod-81d7247bc310b682e6e7396aca5dcb3b8c6bb925dc29aa68d98484c1584d7f48.scope: Deactivated successfully.
Jan 27 09:16:33 compute-0 podman[281018]: 2026-01-27 09:16:33.354076401 +0000 UTC m=+0.021426748 container died 81d7247bc310b682e6e7396aca5dcb3b8c6bb925dc29aa68d98484c1584d7f48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_boyd, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc0b498ac630f656c6acd9a0a66472c4c7214a92b9365a6e057971de7888c497-merged.mount: Deactivated successfully.
Jan 27 09:16:33 compute-0 podman[281018]: 2026-01-27 09:16:33.41393005 +0000 UTC m=+0.081280377 container remove 81d7247bc310b682e6e7396aca5dcb3b8c6bb925dc29aa68d98484c1584d7f48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 27 09:16:33 compute-0 systemd[1]: libpod-conmon-81d7247bc310b682e6e7396aca5dcb3b8c6bb925dc29aa68d98484c1584d7f48.scope: Deactivated successfully.
Jan 27 09:16:33 compute-0 sudo[280874]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:16:33 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:16:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:16:33 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:16:33 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b559a954-a3a4-41ce-a59c-d8f238909964 does not exist
Jan 27 09:16:33 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev bb278a95-7eb5-48ab-b360-e7fa78d47386 does not exist
Jan 27 09:16:33 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c2aa0da5-8196-49ba-a09b-fa0226acc4b1 does not exist
Jan 27 09:16:33 compute-0 sudo[281033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:33 compute-0 sudo[281033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:33 compute-0 sudo[281033]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:33 compute-0 sudo[281058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:16:33 compute-0 sudo[281058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:33 compute-0 sudo[281058]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:33 compute-0 sudo[281083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:33 compute-0 sudo[281083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:33 compute-0 sudo[281083]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:33 compute-0 sudo[281108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:33 compute-0 sudo[281108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:33 compute-0 sudo[281108]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:34 compute-0 ceph-mon[74357]: pgmap v1617: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:34 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:16:34 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:16:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:34.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:34.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:35 compute-0 podman[281134]: 2026-01-27 09:16:35.267747943 +0000 UTC m=+0.086024217 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:16:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:36 compute-0 ceph-mon[74357]: pgmap v1618: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:36.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:36.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:38 compute-0 ceph-mon[74357]: pgmap v1619: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:38.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:38.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:40 compute-0 ceph-mon[74357]: pgmap v1620: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:16:40 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:40 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:40 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:40.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 27 09:16:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:41.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:42 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:42 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:42 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:42.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:42 compute-0 ceph-mon[74357]: pgmap v1621: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 27 09:16:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 27 09:16:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:43.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:44 compute-0 ceph-mon[74357]: pgmap v1622: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 27 09:16:44 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:44 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:44 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:44.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 27 09:16:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:45.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:16:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:16:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:16:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:16:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:16:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:16:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:46 compute-0 ceph-mon[74357]: pgmap v1623: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 27 09:16:46 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:46 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:46 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:46.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 65 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.0 MiB/s wr, 40 op/s
Jan 27 09:16:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:47.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:48 compute-0 ceph-mon[74357]: pgmap v1624: 305 pgs: 305 active+clean; 65 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.0 MiB/s wr, 40 op/s
Jan 27 09:16:48 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:48 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:48 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:48.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 65 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.0 MiB/s wr, 40 op/s
Jan 27 09:16:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:49.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:50 compute-0 podman[281169]: 2026-01-27 09:16:50.233689907 +0000 UTC m=+0.052490598 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 09:16:50 compute-0 ceph-mon[74357]: pgmap v1625: 305 pgs: 305 active+clean; 65 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.0 MiB/s wr, 40 op/s
Jan 27 09:16:50 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:50 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:50 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:50.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 88 MiB data, 282 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:16:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:51.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.425912) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505411425970, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2194, "num_deletes": 256, "total_data_size": 3907207, "memory_usage": 3967360, "flush_reason": "Manual Compaction"}
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505411515121, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3827586, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33729, "largest_seqno": 35922, "table_properties": {"data_size": 3817562, "index_size": 6390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20810, "raw_average_key_size": 20, "raw_value_size": 3797516, "raw_average_value_size": 3789, "num_data_blocks": 278, "num_entries": 1002, "num_filter_entries": 1002, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769505202, "oldest_key_time": 1769505202, "file_creation_time": 1769505411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 89269 microseconds, and 8263 cpu microseconds.
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.515184) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3827586 bytes OK
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.515209) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.552140) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.552186) EVENT_LOG_v1 {"time_micros": 1769505411552172, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.552216) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3898265, prev total WAL file size 3898265, number of live WAL files 2.
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.554101) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3737KB)], [74(8692KB)]
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505411554144, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12728332, "oldest_snapshot_seqno": -1}
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6088 keys, 10749169 bytes, temperature: kUnknown
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505411853164, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10749169, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10707714, "index_size": 25153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 155034, "raw_average_key_size": 25, "raw_value_size": 10597020, "raw_average_value_size": 1740, "num_data_blocks": 1019, "num_entries": 6088, "num_filter_entries": 6088, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769505411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.854211) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10749169 bytes
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.904410) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 42.5 rd, 35.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.5 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6616, records dropped: 528 output_compression: NoCompression
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.904456) EVENT_LOG_v1 {"time_micros": 1769505411904439, "job": 42, "event": "compaction_finished", "compaction_time_micros": 299181, "compaction_time_cpu_micros": 43149, "output_level": 6, "num_output_files": 1, "total_output_size": 10749169, "num_input_records": 6616, "num_output_records": 6088, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505411905563, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505411907642, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.554031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.907750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.907756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.907757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.907759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:16:51 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:16:51.907760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:16:52 compute-0 ceph-mon[74357]: pgmap v1626: 305 pgs: 305 active+clean; 88 MiB data, 282 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:16:52 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:52 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:52 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:52.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 88 MiB data, 282 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 09:16:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:53.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:53 compute-0 sudo[281190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:53 compute-0 sudo[281190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:53 compute-0 sudo[281190]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:53 compute-0 sudo[281215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:16:53 compute-0 sudo[281215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:16:53 compute-0 sudo[281215]: pam_unix(sudo:session): session closed for user root
Jan 27 09:16:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:16:54.256 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:16:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:16:54.256 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:16:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:16:54.256 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:16:54 compute-0 ceph-mon[74357]: pgmap v1627: 305 pgs: 305 active+clean; 88 MiB data, 282 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 09:16:54 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1277575508' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:16:54 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1277575508' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:16:54 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:54 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:54 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:54.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 65 MiB data, 282 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 09:16:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:55.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:56 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:56 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:56 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:56.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:56 compute-0 ceph-mon[74357]: pgmap v1628: 305 pgs: 305 active+clean; 65 MiB data, 282 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 09:16:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 27 09:16:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:16:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:16:57 compute-0 ceph-mon[74357]: pgmap v1629: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 27 09:16:58 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:58 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:58 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:16:58.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 765 KiB/s wr, 16 op/s
Jan 27 09:16:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:16:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:16:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:16:59.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:16:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:16:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1658543422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:16:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:16:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1658543422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:17:00 compute-0 ceph-mon[74357]: pgmap v1630: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 765 KiB/s wr, 16 op/s
Jan 27 09:17:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1658543422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:17:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1658543422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:17:00 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:00 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:00 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:00.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 765 KiB/s wr, 16 op/s
Jan 27 09:17:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:01.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:02 compute-0 ceph-mon[74357]: pgmap v1631: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 765 KiB/s wr, 16 op/s
Jan 27 09:17:02 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:02 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:02 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:02.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:17:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:03.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:04 compute-0 ceph-mon[74357]: pgmap v1632: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:17:04 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:04 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:04 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:04.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:17:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:05.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:06 compute-0 ceph-mon[74357]: pgmap v1633: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:17:06 compute-0 podman[281246]: 2026-01-27 09:17:06.301872586 +0000 UTC m=+0.111520704 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 09:17:06 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:06 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:06 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:06.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:17:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:07.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:08 compute-0 ceph-mon[74357]: pgmap v1634: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 27 09:17:08 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:08 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:08 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:08.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:09.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:10 compute-0 ceph-mon[74357]: pgmap v1635: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:10 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:10 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:10 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:10.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:11.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:12 compute-0 ceph-mon[74357]: pgmap v1636: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:12 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:12 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:12 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:12.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:13.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:13 compute-0 nova_compute[247671]: 2026-01-27 09:17:13.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:17:14 compute-0 sudo[281276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:14 compute-0 sudo[281276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:14 compute-0 sudo[281276]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:14 compute-0 sudo[281301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:14 compute-0 sudo[281301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:14 compute-0 sudo[281301]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:14 compute-0 ceph-mon[74357]: pgmap v1637: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:14 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:14 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:14 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:14.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:15.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:17:15
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'images', 'vms', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.data']
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:17:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:17:15 compute-0 nova_compute[247671]: 2026-01-27 09:17:15.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:17:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:16 compute-0 nova_compute[247671]: 2026-01-27 09:17:16.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:17:16 compute-0 nova_compute[247671]: 2026-01-27 09:17:16.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:17:16 compute-0 ceph-mon[74357]: pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:16 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:16 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:16 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:16.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:17.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:18 compute-0 nova_compute[247671]: 2026-01-27 09:17:18.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:17:18 compute-0 nova_compute[247671]: 2026-01-27 09:17:18.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:17:18 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:18 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:18 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:18.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:18 compute-0 ceph-mon[74357]: pgmap v1639: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:19.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:20 compute-0 ceph-mon[74357]: pgmap v1640: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:20 compute-0 nova_compute[247671]: 2026-01-27 09:17:20.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:17:20 compute-0 nova_compute[247671]: 2026-01-27 09:17:20.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:17:20 compute-0 nova_compute[247671]: 2026-01-27 09:17:20.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:17:20 compute-0 nova_compute[247671]: 2026-01-27 09:17:20.460 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:17:20 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:20 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:20 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:20.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:21.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:21 compute-0 podman[281330]: 2026-01-27 09:17:21.248003203 +0000 UTC m=+0.061541146 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 27 09:17:21 compute-0 nova_compute[247671]: 2026-01-27 09:17:21.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:17:22 compute-0 ceph-mon[74357]: pgmap v1641: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:22 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:17:22.377 159876 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '22:4d:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:6d:91:46:e6:fc'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:17:22 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:17:22.377 159876 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:17:22 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:22 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 27 09:17:22 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:22.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 27 09:17:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:23.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:23 compute-0 nova_compute[247671]: 2026-01-27 09:17:23.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:17:23 compute-0 nova_compute[247671]: 2026-01-27 09:17:23.448 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:17:23 compute-0 nova_compute[247671]: 2026-01-27 09:17:23.448 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:17:23 compute-0 nova_compute[247671]: 2026-01-27 09:17:23.449 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:17:23 compute-0 nova_compute[247671]: 2026-01-27 09:17:23.449 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:17:23 compute-0 nova_compute[247671]: 2026-01-27 09:17:23.449 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:17:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:17:23 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3155226448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:17:23 compute-0 nova_compute[247671]: 2026-01-27 09:17:23.908 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.057 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.057 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.058 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.058 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.163 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.163 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.164 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.211 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:17:24 compute-0 ceph-mon[74357]: pgmap v1642: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:24 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3155226448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:17:24 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2229656043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:17:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:17:24 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3087781389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.661 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.667 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.681 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.683 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:17:24 compute-0 nova_compute[247671]: 2026-01-27 09:17:24.683 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:17:24 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:24 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:24 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:24.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:25.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:25 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3087781389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:17:25 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1126501923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:17:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:17:26 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8339 writes, 36K keys, 8339 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8339 writes, 8339 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1505 writes, 6436 keys, 1505 commit groups, 1.0 writes per commit group, ingest: 10.25 MB, 0.02 MB/s
                                           Interval WAL: 1505 writes, 1505 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     59.0      0.83              0.13        21    0.040       0      0       0.0       0.0
                                             L6      1/0   10.25 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.5    124.4    101.6      1.70              0.46        20    0.085    104K    11K       0.0       0.0
                                            Sum      1/0   10.25 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.5     83.6     87.6      2.53              0.59        41    0.062    104K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     67.3     69.5      0.69              0.13         8    0.086     25K   2568       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    124.4    101.6      1.70              0.46        20    0.085    104K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     59.4      0.82              0.13        20    0.041       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.4      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.048, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.22 GB write, 0.07 MB/s write, 0.21 GB read, 0.07 MB/s read, 2.5 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f59eb431f0#2 capacity: 304.00 MB usage: 25.04 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000175 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1474,24.23 MB,7.96888%) FilterBlock(42,298.55 KB,0.0959045%) IndexBlock(42,531.30 KB,0.170673%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 27 09:17:26 compute-0 ceph-mon[74357]: pgmap v1643: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2867258129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:17:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1624351934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:17:26 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:26 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:26 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:26.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 27 09:17:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:27.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 27 09:17:28 compute-0 ceph-mon[74357]: pgmap v1644: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:28 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:28 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:28 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:28.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:29.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:29 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:17:29.379 159876 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fd496359-7f94-4196-96c9-9e7fb7c843a0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:17:29 compute-0 ceph-mon[74357]: pgmap v1645: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:30 compute-0 nova_compute[247671]: 2026-01-27 09:17:30.684 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:17:30 compute-0 nova_compute[247671]: 2026-01-27 09:17:30.685 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:17:30 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:30 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:30 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:30.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:31.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:32 compute-0 ceph-mon[74357]: pgmap v1646: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:32 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:32 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:32 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:32.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:33.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:33 compute-0 sudo[281400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:33 compute-0 sudo[281400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:33 compute-0 sudo[281400]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:33 compute-0 sudo[281425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:17:33 compute-0 sudo[281425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:33 compute-0 sudo[281425]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:34 compute-0 sudo[281450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:34 compute-0 sudo[281450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:34 compute-0 sudo[281450]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:34 compute-0 sudo[281475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:17:34 compute-0 sudo[281475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:34 compute-0 sudo[281500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:34 compute-0 sudo[281500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:34 compute-0 sudo[281500]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:34 compute-0 ceph-mon[74357]: pgmap v1647: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:34 compute-0 sudo[281526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:34 compute-0 sudo[281526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:34 compute-0 sudo[281526]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:34 compute-0 sudo[281475]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:34 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:34 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:34 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:34.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:35.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 09:17:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:17:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 09:17:36 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:17:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:36 compute-0 ceph-mon[74357]: pgmap v1648: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:17:36 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:17:36 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:36 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:36 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:36.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:17:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:17:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:17:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:17:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:17:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:17:37 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 311d1dcc-730a-4307-a4a0-e9de593fd433 does not exist
Jan 27 09:17:37 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 05416556-a4db-433d-a6bd-6ed2be6d203a does not exist
Jan 27 09:17:37 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev e956fc28-111b-4dba-8575-5677243284e3 does not exist
Jan 27 09:17:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:17:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:17:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:17:37 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:17:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:17:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:17:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:37.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:37 compute-0 sudo[281583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:37 compute-0 sudo[281583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:37 compute-0 sudo[281583]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:37 compute-0 sudo[281614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:17:37 compute-0 sudo[281614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:37 compute-0 sudo[281614]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:37 compute-0 podman[281607]: 2026-01-27 09:17:37.179845612 +0000 UTC m=+0.080794604 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:17:37 compute-0 sudo[281652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:37 compute-0 sudo[281652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:37 compute-0 sudo[281652]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:17:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:17:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:17:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:17:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:17:37 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:17:37 compute-0 sudo[281682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:17:37 compute-0 sudo[281682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:37 compute-0 podman[281749]: 2026-01-27 09:17:37.549552805 +0000 UTC m=+0.036786879 container create 77ee557afa09035487dedd6b63d9fb1d4ad35d6c534244f1ccc8103335a0ccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:17:37 compute-0 systemd[1]: Started libpod-conmon-77ee557afa09035487dedd6b63d9fb1d4ad35d6c534244f1ccc8103335a0ccde.scope.
Jan 27 09:17:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:17:37 compute-0 podman[281749]: 2026-01-27 09:17:37.611440869 +0000 UTC m=+0.098674953 container init 77ee557afa09035487dedd6b63d9fb1d4ad35d6c534244f1ccc8103335a0ccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lumiere, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 27 09:17:37 compute-0 podman[281749]: 2026-01-27 09:17:37.618145173 +0000 UTC m=+0.105379247 container start 77ee557afa09035487dedd6b63d9fb1d4ad35d6c534244f1ccc8103335a0ccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:17:37 compute-0 podman[281749]: 2026-01-27 09:17:37.621302919 +0000 UTC m=+0.108537013 container attach 77ee557afa09035487dedd6b63d9fb1d4ad35d6c534244f1ccc8103335a0ccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:17:37 compute-0 hungry_lumiere[281765]: 167 167
Jan 27 09:17:37 compute-0 systemd[1]: libpod-77ee557afa09035487dedd6b63d9fb1d4ad35d6c534244f1ccc8103335a0ccde.scope: Deactivated successfully.
Jan 27 09:17:37 compute-0 conmon[281765]: conmon 77ee557afa09035487de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77ee557afa09035487dedd6b63d9fb1d4ad35d6c534244f1ccc8103335a0ccde.scope/container/memory.events
Jan 27 09:17:37 compute-0 podman[281749]: 2026-01-27 09:17:37.624552298 +0000 UTC m=+0.111786372 container died 77ee557afa09035487dedd6b63d9fb1d4ad35d6c534244f1ccc8103335a0ccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lumiere, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 27 09:17:37 compute-0 podman[281749]: 2026-01-27 09:17:37.532316402 +0000 UTC m=+0.019550506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:17:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d764f201ac79c4cb9d5102e0d1b70a785deb8970be86ba911026c9333408e4af-merged.mount: Deactivated successfully.
Jan 27 09:17:37 compute-0 podman[281749]: 2026-01-27 09:17:37.663498584 +0000 UTC m=+0.150732658 container remove 77ee557afa09035487dedd6b63d9fb1d4ad35d6c534244f1ccc8103335a0ccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:17:37 compute-0 systemd[1]: libpod-conmon-77ee557afa09035487dedd6b63d9fb1d4ad35d6c534244f1ccc8103335a0ccde.scope: Deactivated successfully.
Jan 27 09:17:37 compute-0 podman[281790]: 2026-01-27 09:17:37.812968718 +0000 UTC m=+0.042728322 container create dba71e55f915307214f83c198610739621073ee85ebdaf693145a063ff5aa879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:17:37 compute-0 systemd[1]: Started libpod-conmon-dba71e55f915307214f83c198610739621073ee85ebdaf693145a063ff5aa879.scope.
Jan 27 09:17:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f71f2ee04c1871a38961214762fab99960db0e399f04677e41c71e23009211/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f71f2ee04c1871a38961214762fab99960db0e399f04677e41c71e23009211/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f71f2ee04c1871a38961214762fab99960db0e399f04677e41c71e23009211/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f71f2ee04c1871a38961214762fab99960db0e399f04677e41c71e23009211/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f71f2ee04c1871a38961214762fab99960db0e399f04677e41c71e23009211/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:37 compute-0 podman[281790]: 2026-01-27 09:17:37.796449875 +0000 UTC m=+0.026209499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:17:37 compute-0 podman[281790]: 2026-01-27 09:17:37.898406887 +0000 UTC m=+0.128166501 container init dba71e55f915307214f83c198610739621073ee85ebdaf693145a063ff5aa879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cohen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:17:37 compute-0 podman[281790]: 2026-01-27 09:17:37.90438456 +0000 UTC m=+0.134144164 container start dba71e55f915307214f83c198610739621073ee85ebdaf693145a063ff5aa879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cohen, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:17:37 compute-0 podman[281790]: 2026-01-27 09:17:37.907381162 +0000 UTC m=+0.137140766 container attach dba71e55f915307214f83c198610739621073ee85ebdaf693145a063ff5aa879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cohen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:17:38 compute-0 ceph-mon[74357]: pgmap v1649: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:38 compute-0 objective_cohen[281806]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:17:38 compute-0 objective_cohen[281806]: --> relative data size: 1.0
Jan 27 09:17:38 compute-0 objective_cohen[281806]: --> All data devices are unavailable
Jan 27 09:17:38 compute-0 systemd[1]: libpod-dba71e55f915307214f83c198610739621073ee85ebdaf693145a063ff5aa879.scope: Deactivated successfully.
Jan 27 09:17:38 compute-0 podman[281790]: 2026-01-27 09:17:38.704792667 +0000 UTC m=+0.934552271 container died dba71e55f915307214f83c198610739621073ee85ebdaf693145a063ff5aa879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 09:17:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5f71f2ee04c1871a38961214762fab99960db0e399f04677e41c71e23009211-merged.mount: Deactivated successfully.
Jan 27 09:17:38 compute-0 podman[281790]: 2026-01-27 09:17:38.758341884 +0000 UTC m=+0.988101488 container remove dba71e55f915307214f83c198610739621073ee85ebdaf693145a063ff5aa879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cohen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 27 09:17:38 compute-0 systemd[1]: libpod-conmon-dba71e55f915307214f83c198610739621073ee85ebdaf693145a063ff5aa879.scope: Deactivated successfully.
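[editor's note] The short-lived objective_cohen container above looks like cephadm's periodic ceph-volume batch scan: the "-->" lines are its report, and "All data devices are unavailable" means the single LVM data device is already consumed by an existing OSD, so no new OSDs are proposed and the container exits immediately. A minimal sketch of pulling the verdict out of those report lines (the parser is illustrative, not cephadm's own code):

    import re

    def parse_batch_report(lines):
        """Extract device counts and the availability verdict from the
        '-->' progress lines that ceph-volume batch prints (as captured above)."""
        info = {"physical": 0, "lvm": 0, "all_unavailable": False}
        for line in lines:
            m = re.search(r"passed data devices: (\d+) physical, (\d+) LVM", line)
            if m:
                info["physical"], info["lvm"] = int(m.group(1)), int(m.group(2))
            if "All data devices are unavailable" in line:
                info["all_unavailable"] = True
        return info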
Jan 27 09:17:38 compute-0 sudo[281682]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:38 compute-0 sudo[281836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:38 compute-0 sudo[281836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:38 compute-0 sudo[281836]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:38 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:38 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:38 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:38.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
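[editor's note] The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, each answered 200 with an empty body every couple of seconds, have the shape of external health probes against the RGW beast frontend. A sketch of the same probe (the host and port here are assumptions; the log does not record the listening port):

    import http.client

    # Probe the RGW frontend the way the .100/.102 checkers appear to.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200, as logged above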
Jan 27 09:17:38 compute-0 sudo[281861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:17:38 compute-0 sudo[281861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:38 compute-0 sudo[281861]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:38 compute-0 sudo[281886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:38 compute-0 sudo[281886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:38 compute-0 sudo[281886]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:39 compute-0 sudo[281911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:17:39 compute-0 sudo[281911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
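[editor's note] The sudo sequence above is cephadm's per-host probe pattern: /bin/true to verify passwordless sudo, /bin/which python3 to locate an interpreter, then the copied cephadm binary wrapping a ceph-volume call. A hedged reconstruction with subprocess (command strings are verbatim from the log; the wrapper function is an illustrative stand-in for the mgr's remote-execution code):

    import subprocess

    def cephadm_lvm_list():
        subprocess.run(["sudo", "/bin/true"], check=True)  # sudo sanity check
        python3 = subprocess.run(
            ["sudo", "/bin/which", "python3"],
            capture_output=True, text=True, check=True).stdout.strip()
        return subprocess.run(
            ["sudo", python3,
             "/var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/"
             "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
             "--image",
             "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
             "--timeout", "895",
             "ceph-volume",
             "--fsid", "281e9bde-2795-59f4-98ac-90cf5b49a2de",
             "--", "lvm", "list", "--format", "json"],
            capture_output=True, text=True, check=True).stdout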
Jan 27 09:17:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:39.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:39 compute-0 podman[281976]: 2026-01-27 09:17:39.301180358 +0000 UTC m=+0.040462550 container create b981b6b1d15d34f1685b4c33f6a99914c776e2e895f41fdbd98cc12ef6a80d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jones, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:17:39 compute-0 systemd[1]: Started libpod-conmon-b981b6b1d15d34f1685b4c33f6a99914c776e2e895f41fdbd98cc12ef6a80d32.scope.
Jan 27 09:17:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:17:39 compute-0 podman[281976]: 2026-01-27 09:17:39.282843685 +0000 UTC m=+0.022125897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:17:39 compute-0 podman[281976]: 2026-01-27 09:17:39.382669399 +0000 UTC m=+0.121951611 container init b981b6b1d15d34f1685b4c33f6a99914c776e2e895f41fdbd98cc12ef6a80d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 27 09:17:39 compute-0 podman[281976]: 2026-01-27 09:17:39.389990339 +0000 UTC m=+0.129272521 container start b981b6b1d15d34f1685b4c33f6a99914c776e2e895f41fdbd98cc12ef6a80d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 27 09:17:39 compute-0 podman[281976]: 2026-01-27 09:17:39.393946828 +0000 UTC m=+0.133229010 container attach b981b6b1d15d34f1685b4c33f6a99914c776e2e895f41fdbd98cc12ef6a80d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jones, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:17:39 compute-0 romantic_jones[281993]: 167 167
Jan 27 09:17:39 compute-0 systemd[1]: libpod-b981b6b1d15d34f1685b4c33f6a99914c776e2e895f41fdbd98cc12ef6a80d32.scope: Deactivated successfully.
Jan 27 09:17:39 compute-0 podman[281976]: 2026-01-27 09:17:39.39621539 +0000 UTC m=+0.135497612 container died b981b6b1d15d34f1685b4c33f6a99914c776e2e895f41fdbd98cc12ef6a80d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jones, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:17:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-04fb5aff46d04bbf028a71e021583b949eea879956c0cae151efb91016427718-merged.mount: Deactivated successfully.
Jan 27 09:17:39 compute-0 podman[281976]: 2026-01-27 09:17:39.433561163 +0000 UTC m=+0.172843345 container remove b981b6b1d15d34f1685b4c33f6a99914c776e2e895f41fdbd98cc12ef6a80d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jones, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 09:17:39 compute-0 systemd[1]: libpod-conmon-b981b6b1d15d34f1685b4c33f6a99914c776e2e895f41fdbd98cc12ef6a80d32.scope: Deactivated successfully.
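[editor's note] The bare "167 167" that romantic_jones prints (and great_hertz repeats later) is a uid/gid probe: 167:167 is the ceph user and group inside the image on RHEL-family builds, and cephadm reads the pair so it can chown daemon directories on the host to match. Consuming it is a one-liner:

    uid, gid = map(int, "167 167".split())  # -> 167, 167 (ceph:ceph inside the image)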
Jan 27 09:17:39 compute-0 podman[282017]: 2026-01-27 09:17:39.591845017 +0000 UTC m=+0.040317715 container create deb065b1b0518e4fcd8302f78cc23f500122d1f3b0220d30ba7d78383bbd0060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pascal, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:17:39 compute-0 systemd[1]: Started libpod-conmon-deb065b1b0518e4fcd8302f78cc23f500122d1f3b0220d30ba7d78383bbd0060.scope.
Jan 27 09:17:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:17:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a13f1fb714f5cc54c1bd867010e8c44e99a557eb1c0583a11b8f312809084a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a13f1fb714f5cc54c1bd867010e8c44e99a557eb1c0583a11b8f312809084a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a13f1fb714f5cc54c1bd867010e8c44e99a557eb1c0583a11b8f312809084a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a13f1fb714f5cc54c1bd867010e8c44e99a557eb1c0583a11b8f312809084a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:39 compute-0 podman[282017]: 2026-01-27 09:17:39.575675614 +0000 UTC m=+0.024148302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:17:39 compute-0 podman[282017]: 2026-01-27 09:17:39.673054021 +0000 UTC m=+0.121526739 container init deb065b1b0518e4fcd8302f78cc23f500122d1f3b0220d30ba7d78383bbd0060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pascal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:17:39 compute-0 podman[282017]: 2026-01-27 09:17:39.67961444 +0000 UTC m=+0.128087138 container start deb065b1b0518e4fcd8302f78cc23f500122d1f3b0220d30ba7d78383bbd0060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pascal, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:17:39 compute-0 podman[282017]: 2026-01-27 09:17:39.682827228 +0000 UTC m=+0.131299926 container attach deb065b1b0518e4fcd8302f78cc23f500122d1f3b0220d30ba7d78383bbd0060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pascal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 27 09:17:40 compute-0 ceph-mon[74357]: pgmap v1650: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5d8f6f0 =====
Jan 27 09:17:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:41.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5d8f6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:41 compute-0 radosgw[92542]: beast: 0x7f84d5d8f6f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:41.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:41 compute-0 gallant_pascal[282033]: {
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:     "0": [
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:         {
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             "devices": [
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "/dev/loop3"
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             ],
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             "lv_name": "ceph_lv0",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             "lv_size": "7511998464",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             "name": "ceph_lv0",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             "tags": {
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "ceph.cluster_name": "ceph",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "ceph.crush_device_class": "",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "ceph.encrypted": "0",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "ceph.osd_id": "0",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "ceph.type": "block",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:                 "ceph.vdo": "0"
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             },
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             "type": "block",
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:             "vg_name": "ceph_vg0"
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:         }
Jan 27 09:17:41 compute-0 gallant_pascal[282033]:     ]
Jan 27 09:17:41 compute-0 gallant_pascal[282033]: }
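[editor's note] The gallant_pascal payload above is the JSON answer to the lvm list call: a map of OSD id to its logical volumes, with the metadata duplicated between the flat lv_tags string and the parsed tags object. A short consumer, assuming exactly the shape printed above:

    import json

    def osd_devices(lvm_list_json: str):
        """Map osd_id -> (lv_path, osd_fsid) from `ceph-volume lvm list --format json`."""
        out = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                out[int(osd_id)] = (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
        return out

    # With the payload above:
    # {0: ('/dev/ceph_vg0/ceph_lv0', 'c06a7c81-ab3c-42b8-812f-79473670be30')}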
Jan 27 09:17:41 compute-0 systemd[1]: libpod-deb065b1b0518e4fcd8302f78cc23f500122d1f3b0220d30ba7d78383bbd0060.scope: Deactivated successfully.
Jan 27 09:17:41 compute-0 podman[282017]: 2026-01-27 09:17:41.273313439 +0000 UTC m=+1.721786147 container died deb065b1b0518e4fcd8302f78cc23f500122d1f3b0220d30ba7d78383bbd0060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:17:41 compute-0 ceph-mon[74357]: pgmap v1651: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-69a13f1fb714f5cc54c1bd867010e8c44e99a557eb1c0583a11b8f312809084a-merged.mount: Deactivated successfully.
Jan 27 09:17:41 compute-0 podman[282017]: 2026-01-27 09:17:41.323235146 +0000 UTC m=+1.771707844 container remove deb065b1b0518e4fcd8302f78cc23f500122d1f3b0220d30ba7d78383bbd0060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 09:17:41 compute-0 systemd[1]: libpod-conmon-deb065b1b0518e4fcd8302f78cc23f500122d1f3b0220d30ba7d78383bbd0060.scope: Deactivated successfully.
Jan 27 09:17:41 compute-0 sudo[281911]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:41 compute-0 sudo[282054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:41 compute-0 sudo[282054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:41 compute-0 sudo[282054]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:41 compute-0 sudo[282079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:17:41 compute-0 sudo[282079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:41 compute-0 sudo[282079]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:41 compute-0 sudo[282104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:41 compute-0 sudo[282104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:41 compute-0 sudo[282104]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:41 compute-0 sudo[282129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:17:41 compute-0 sudo[282129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:41 compute-0 podman[282193]: 2026-01-27 09:17:41.864096076 +0000 UTC m=+0.037869538 container create a28a5f9809922ecef5787a34530dfdd8dcae5f16a799c3e511d3bf660c85d1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hertz, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:17:41 compute-0 systemd[1]: Started libpod-conmon-a28a5f9809922ecef5787a34530dfdd8dcae5f16a799c3e511d3bf660c85d1ae.scope.
Jan 27 09:17:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:17:41 compute-0 podman[282193]: 2026-01-27 09:17:41.925618011 +0000 UTC m=+0.099391503 container init a28a5f9809922ecef5787a34530dfdd8dcae5f16a799c3e511d3bf660c85d1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:17:41 compute-0 podman[282193]: 2026-01-27 09:17:41.931031879 +0000 UTC m=+0.104805351 container start a28a5f9809922ecef5787a34530dfdd8dcae5f16a799c3e511d3bf660c85d1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hertz, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 27 09:17:41 compute-0 podman[282193]: 2026-01-27 09:17:41.934560956 +0000 UTC m=+0.108334438 container attach a28a5f9809922ecef5787a34530dfdd8dcae5f16a799c3e511d3bf660c85d1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:17:41 compute-0 great_hertz[282209]: 167 167
Jan 27 09:17:41 compute-0 systemd[1]: libpod-a28a5f9809922ecef5787a34530dfdd8dcae5f16a799c3e511d3bf660c85d1ae.scope: Deactivated successfully.
Jan 27 09:17:41 compute-0 podman[282193]: 2026-01-27 09:17:41.936753376 +0000 UTC m=+0.110526848 container died a28a5f9809922ecef5787a34530dfdd8dcae5f16a799c3e511d3bf660c85d1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:17:41 compute-0 podman[282193]: 2026-01-27 09:17:41.849410993 +0000 UTC m=+0.023184485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:17:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-014c3bda65b4317728d1f70242b3bcc4ec90c15e1e250a86ebe98572c3846e98-merged.mount: Deactivated successfully.
Jan 27 09:17:41 compute-0 podman[282193]: 2026-01-27 09:17:41.972614797 +0000 UTC m=+0.146388259 container remove a28a5f9809922ecef5787a34530dfdd8dcae5f16a799c3e511d3bf660c85d1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 27 09:17:41 compute-0 systemd[1]: libpod-conmon-a28a5f9809922ecef5787a34530dfdd8dcae5f16a799c3e511d3bf660c85d1ae.scope: Deactivated successfully.
Jan 27 09:17:42 compute-0 podman[282233]: 2026-01-27 09:17:42.13524579 +0000 UTC m=+0.042250958 container create a49485320654b54b98681329e68eee4bb52264e0a04cb18ddecfd7bd2fb229d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jones, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 27 09:17:42 compute-0 systemd[1]: Started libpod-conmon-a49485320654b54b98681329e68eee4bb52264e0a04cb18ddecfd7bd2fb229d9.scope.
Jan 27 09:17:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a1c88fad2a817a2288c18c703fffff5999bb6b300445ff5ec819677663d7ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a1c88fad2a817a2288c18c703fffff5999bb6b300445ff5ec819677663d7ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a1c88fad2a817a2288c18c703fffff5999bb6b300445ff5ec819677663d7ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a1c88fad2a817a2288c18c703fffff5999bb6b300445ff5ec819677663d7ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:42 compute-0 podman[282233]: 2026-01-27 09:17:42.205021781 +0000 UTC m=+0.112026969 container init a49485320654b54b98681329e68eee4bb52264e0a04cb18ddecfd7bd2fb229d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:17:42 compute-0 podman[282233]: 2026-01-27 09:17:42.212876986 +0000 UTC m=+0.119882154 container start a49485320654b54b98681329e68eee4bb52264e0a04cb18ddecfd7bd2fb229d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 09:17:42 compute-0 podman[282233]: 2026-01-27 09:17:42.119802738 +0000 UTC m=+0.026807936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:17:42 compute-0 podman[282233]: 2026-01-27 09:17:42.216692051 +0000 UTC m=+0.123697229 container attach a49485320654b54b98681329e68eee4bb52264e0a04cb18ddecfd7bd2fb229d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jones, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:17:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:43 compute-0 suspicious_jones[282249]: {
Jan 27 09:17:43 compute-0 suspicious_jones[282249]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:17:43 compute-0 suspicious_jones[282249]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:17:43 compute-0 suspicious_jones[282249]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:17:43 compute-0 suspicious_jones[282249]:         "osd_id": 0,
Jan 27 09:17:43 compute-0 suspicious_jones[282249]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:17:43 compute-0 suspicious_jones[282249]:         "type": "bluestore"
Jan 27 09:17:43 compute-0 suspicious_jones[282249]:     }
Jan 27 09:17:43 compute-0 suspicious_jones[282249]: }
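[editor's note] suspicious_jones carries the matching raw list answer, keyed by OSD fsid rather than id, and pointing at the device-mapper path (/dev/mapper/ceph_vg0-ceph_lv0) instead of the LV path. The two inventories can be reconciled on the osd fsid; a sketch (the helper name is illustrative):

    def reconcile(lvm_listing: dict, raw_listing: dict) -> dict:
        """osd_id -> True if the LVM-tagged fsid also shows up in `raw list`."""
        raw_fsids = {entry["osd_uuid"] for entry in raw_listing.values()}
        return {
            int(osd_id): lv["tags"]["ceph.osd_fsid"] in raw_fsids
            for osd_id, lvs in lvm_listing.items()
            for lv in lvs
        }

    # Here both listings name c06a7c81-ab3c-42b8-812f-79473670be30, so: {0: True}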
Jan 27 09:17:43 compute-0 systemd[1]: libpod-a49485320654b54b98681329e68eee4bb52264e0a04cb18ddecfd7bd2fb229d9.scope: Deactivated successfully.
Jan 27 09:17:43 compute-0 podman[282233]: 2026-01-27 09:17:43.026732782 +0000 UTC m=+0.933737950 container died a49485320654b54b98681329e68eee4bb52264e0a04cb18ddecfd7bd2fb229d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:17:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3a1c88fad2a817a2288c18c703fffff5999bb6b300445ff5ec819677663d7ce-merged.mount: Deactivated successfully.
Jan 27 09:17:43 compute-0 podman[282233]: 2026-01-27 09:17:43.071404425 +0000 UTC m=+0.978409593 container remove a49485320654b54b98681329e68eee4bb52264e0a04cb18ddecfd7bd2fb229d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:17:43 compute-0 systemd[1]: libpod-conmon-a49485320654b54b98681329e68eee4bb52264e0a04cb18ddecfd7bd2fb229d9.scope: Deactivated successfully.
Jan 27 09:17:43 compute-0 sudo[282129]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:17:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:17:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:17:43 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
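[editor's note] The two config-key set commands are the tail end of the scan: the mgr caches the refreshed device inventory and host record in the monitors' config-key store. The cached blob can be read back with the CLI; a sketch (the key name is copied from the log):

    import subprocess

    inventory = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True).stdout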
Jan 27 09:17:43 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f1cf9237-469c-49ab-8720-3b902b07132f does not exist
Jan 27 09:17:43 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5797fac2-d654-488c-8f13-4f53e902f574 does not exist
Jan 27 09:17:43 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f7229288-0960-4240-ba88-b83f607ac621 does not exist
Jan 27 09:17:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:43.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:43.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:43 compute-0 sudo[282285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:43 compute-0 sudo[282285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:43 compute-0 sudo[282285]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:43 compute-0 sudo[282310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:17:43 compute-0 sudo[282310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:43 compute-0 sudo[282310]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:44 compute-0 ceph-mon[74357]: pgmap v1652: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:44 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:17:44 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:17:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:17:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:17:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:17:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:17:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:17:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:17:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:45.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:45.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:45 compute-0 ceph-mon[74357]: pgmap v1653: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.199215) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505466199275, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 726, "num_deletes": 256, "total_data_size": 1009723, "memory_usage": 1023368, "flush_reason": "Manual Compaction"}
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505466206203, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 999349, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35923, "largest_seqno": 36648, "table_properties": {"data_size": 995541, "index_size": 1588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8538, "raw_average_key_size": 19, "raw_value_size": 987878, "raw_average_value_size": 2219, "num_data_blocks": 69, "num_entries": 445, "num_filter_entries": 445, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769505412, "oldest_key_time": 1769505412, "file_creation_time": 1769505466, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 7030 microseconds, and 3056 cpu microseconds.
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.206251) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 999349 bytes OK
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.206271) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.208336) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.208355) EVENT_LOG_v1 {"time_micros": 1769505466208349, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.208374) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1006029, prev total WAL file size 1006029, number of live WAL files 2.
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.208905) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323537' seq:0, type:0; will stop at (end)
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(975KB)], [77(10MB)]
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505466208926, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11748518, "oldest_snapshot_seqno": -1}
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6006 keys, 11617263 bytes, temperature: kUnknown
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505466267359, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 11617263, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11574982, "index_size": 26171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 154285, "raw_average_key_size": 25, "raw_value_size": 11464434, "raw_average_value_size": 1908, "num_data_blocks": 1060, "num_entries": 6006, "num_filter_entries": 6006, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769505466, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.267574) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 11617263 bytes
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.268675) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.8 rd, 198.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.3 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(23.4) write-amplify(11.6) OK, records in: 6533, records dropped: 527 output_compression: NoCompression
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.268695) EVENT_LOG_v1 {"time_micros": 1769505466268685, "job": 44, "event": "compaction_finished", "compaction_time_micros": 58500, "compaction_time_cpu_micros": 22837, "output_level": 6, "num_output_files": 1, "total_output_size": 11617263, "num_input_records": 6533, "num_output_records": 6006, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505466269095, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505466270804, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.208808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.270954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.270961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.270962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.270964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:17:46 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:17:46.270966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:17:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:47.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:47.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:47 compute-0 ceph-mon[74357]: pgmap v1654: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:49.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:49.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:50 compute-0 ceph-mon[74357]: pgmap v1655: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:51.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 09:17:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:51.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 09:17:52 compute-0 podman[282339]: 2026-01-27 09:17:52.235083125 +0000 UTC m=+0.052362745 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 09:17:52 compute-0 ceph-mon[74357]: pgmap v1656: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:53.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:53 compute-0 ceph-mon[74357]: pgmap v1657: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:17:54.257 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:17:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:17:54.258 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:17:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:17:54.258 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:17:54 compute-0 sudo[282358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:54 compute-0 sudo[282358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:54 compute-0 sudo[282358]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:54 compute-0 sudo[282383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:17:54 compute-0 sudo[282383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:17:54 compute-0 sudo[282383]: pam_unix(sudo:session): session closed for user root
Jan 27 09:17:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:55.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:55.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:56 compute-0 ceph-mon[74357]: pgmap v1658: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:57.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:17:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:57.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:17:58 compute-0 ceph-mon[74357]: pgmap v1659: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:17:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:17:59.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:17:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:17:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:17:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:17:59.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:00 compute-0 ceph-mon[74357]: pgmap v1660: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1933983778' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:18:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1933983778' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:18:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:01.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:01.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:02 compute-0 ceph-mon[74357]: pgmap v1661: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:03.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:03.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:03 compute-0 ceph-mon[74357]: pgmap v1662: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:05.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:05.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:06 compute-0 ceph-mon[74357]: pgmap v1663: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:07.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:07.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:08 compute-0 ceph-mon[74357]: pgmap v1664: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:08 compute-0 podman[282415]: 2026-01-27 09:18:08.272866505 +0000 UTC m=+0.088257208 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:18:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:09.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:09.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:10 compute-0 ceph-mon[74357]: pgmap v1665: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:11.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:11.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:12 compute-0 ceph-mon[74357]: pgmap v1666: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:13.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:13.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:13 compute-0 ceph-mon[74357]: pgmap v1667: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:14 compute-0 sudo[282444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:14 compute-0 sudo[282444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:14 compute-0 sudo[282444]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:14 compute-0 sudo[282470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:14 compute-0 sudo[282470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:14 compute-0 sudo[282470]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:18:15
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'vms', '.rgw.root']
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:18:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:15.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:15.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:18:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:18:15 compute-0 nova_compute[247671]: 2026-01-27 09:18:15.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:18:16 compute-0 ceph-mon[74357]: pgmap v1668: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:16 compute-0 nova_compute[247671]: 2026-01-27 09:18:16.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:18:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:17.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:17 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 09:18:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:17.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:17 compute-0 nova_compute[247671]: 2026-01-27 09:18:17.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:18:17 compute-0 nova_compute[247671]: 2026-01-27 09:18:17.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:18:18 compute-0 ceph-mon[74357]: pgmap v1669: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:18 compute-0 nova_compute[247671]: 2026-01-27 09:18:18.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:18:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:19.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:19.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:20 compute-0 ceph-mon[74357]: pgmap v1670: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:21.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:21.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:21 compute-0 ceph-mon[74357]: pgmap v1671: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:21 compute-0 nova_compute[247671]: 2026-01-27 09:18:21.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:18:21 compute-0 nova_compute[247671]: 2026-01-27 09:18:21.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:18:21 compute-0 nova_compute[247671]: 2026-01-27 09:18:21.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:18:21 compute-0 nova_compute[247671]: 2026-01-27 09:18:21.437 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:18:21 compute-0 nova_compute[247671]: 2026-01-27 09:18:21.437 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:18:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:23.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:23.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:23 compute-0 podman[282500]: 2026-01-27 09:18:23.224255486 +0000 UTC m=+0.044114409 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:18:23 compute-0 ceph-mon[74357]: pgmap v1672: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:23 compute-0 nova_compute[247671]: 2026-01-27 09:18:23.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:18:23 compute-0 nova_compute[247671]: 2026-01-27 09:18:23.452 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:18:23 compute-0 nova_compute[247671]: 2026-01-27 09:18:23.453 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:18:23 compute-0 nova_compute[247671]: 2026-01-27 09:18:23.453 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:18:23 compute-0 nova_compute[247671]: 2026-01-27 09:18:23.453 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:18:23 compute-0 nova_compute[247671]: 2026-01-27 09:18:23.453 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:18:23 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:18:23 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3721057455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:18:23 compute-0 nova_compute[247671]: 2026-01-27 09:18:23.917 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.098 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.100 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5143MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.100 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.100 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.213 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.213 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.213 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.281 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:18:24 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3721057455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:18:24 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2724527679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:18:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:18:24 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831401175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.708 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.714 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.732 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.734 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:18:24 compute-0 nova_compute[247671]: 2026-01-27 09:18:24.734 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:18:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:25.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:25.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:25 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2831401175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:18:25 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1942472864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:18:25 compute-0 ceph-mon[74357]: pgmap v1673: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/154573242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:18:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:27.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:27.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:28 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1894065601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:18:28 compute-0 ceph-mon[74357]: pgmap v1674: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:29.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:29.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:30 compute-0 ceph-mon[74357]: pgmap v1675: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:30 compute-0 nova_compute[247671]: 2026-01-27 09:18:30.735 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:18:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:31.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:31.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:31 compute-0 nova_compute[247671]: 2026-01-27 09:18:31.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:18:31 compute-0 ceph-mon[74357]: pgmap v1676: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:33.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:33.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:34 compute-0 ceph-mon[74357]: pgmap v1677: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:34 compute-0 sudo[282569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:34 compute-0 sudo[282569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:34 compute-0 sudo[282569]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:34 compute-0 sudo[282594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:34 compute-0 sudo[282594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:34 compute-0 sudo[282594]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:35.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:35.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:35 compute-0 ceph-mon[74357]: pgmap v1678: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:37.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:37.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:38 compute-0 ceph-mon[74357]: pgmap v1679: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:39.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:39.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:39 compute-0 podman[282621]: 2026-01-27 09:18:39.261151204 +0000 UTC m=+0.080201447 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 09:18:39 compute-0 ceph-mon[74357]: pgmap v1680: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:41.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:41.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:41 compute-0 ceph-mon[74357]: pgmap v1681: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:43.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:43.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:43 compute-0 sudo[282650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:43 compute-0 sudo[282650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:43 compute-0 sudo[282650]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:43 compute-0 sudo[282675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:18:43 compute-0 sudo[282675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:43 compute-0 sudo[282675]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:43 compute-0 sudo[282700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:43 compute-0 sudo[282700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:43 compute-0 sudo[282700]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:43 compute-0 sudo[282725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:18:43 compute-0 sudo[282725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:44 compute-0 ceph-mon[74357]: pgmap v1682: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:44 compute-0 sudo[282725]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:18:44 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:18:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:18:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:18:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:18:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:18:44 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 870bbdf0-43fc-454a-b763-83529879d796 does not exist
Jan 27 09:18:44 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 79506e98-37e2-4f81-aa7f-7bc1764bcb90 does not exist
Jan 27 09:18:44 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 6a0c248f-2ce5-4721-b34a-f27c35e1b8ce does not exist
Jan 27 09:18:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:18:44 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:18:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:18:44 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:18:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:18:44 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:18:44 compute-0 sudo[282781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:44 compute-0 sudo[282781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:44 compute-0 sudo[282781]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:44 compute-0 sudo[282806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:18:44 compute-0 sudo[282806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:44 compute-0 sudo[282806]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:44 compute-0 sudo[282831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:44 compute-0 sudo[282831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:44 compute-0 sudo[282831]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:44 compute-0 sudo[282856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:18:44 compute-0 sudo[282856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:44 compute-0 podman[282924]: 2026-01-27 09:18:44.80546617 +0000 UTC m=+0.038928717 container create d9a51a6089fc1110ebd8d83f46e8c4a010d50b83b60cf10f671470788febf1d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wilbur, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:18:44 compute-0 systemd[1]: Started libpod-conmon-d9a51a6089fc1110ebd8d83f46e8c4a010d50b83b60cf10f671470788febf1d0.scope.
Jan 27 09:18:44 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:18:44 compute-0 podman[282924]: 2026-01-27 09:18:44.881278246 +0000 UTC m=+0.114740823 container init d9a51a6089fc1110ebd8d83f46e8c4a010d50b83b60cf10f671470788febf1d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 09:18:44 compute-0 podman[282924]: 2026-01-27 09:18:44.788126804 +0000 UTC m=+0.021589391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:18:44 compute-0 podman[282924]: 2026-01-27 09:18:44.888900824 +0000 UTC m=+0.122363381 container start d9a51a6089fc1110ebd8d83f46e8c4a010d50b83b60cf10f671470788febf1d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wilbur, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:18:44 compute-0 podman[282924]: 2026-01-27 09:18:44.89169186 +0000 UTC m=+0.125154427 container attach d9a51a6089fc1110ebd8d83f46e8c4a010d50b83b60cf10f671470788febf1d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 27 09:18:44 compute-0 hardcore_wilbur[282940]: 167 167
Jan 27 09:18:44 compute-0 systemd[1]: libpod-d9a51a6089fc1110ebd8d83f46e8c4a010d50b83b60cf10f671470788febf1d0.scope: Deactivated successfully.
Jan 27 09:18:44 compute-0 podman[282924]: 2026-01-27 09:18:44.894867658 +0000 UTC m=+0.128330275 container died d9a51a6089fc1110ebd8d83f46e8c4a010d50b83b60cf10f671470788febf1d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:18:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-88b2d0327c48319ea05b8d33f990d650eb7a05b76ac597e65198f3c16c8883e4-merged.mount: Deactivated successfully.
Jan 27 09:18:44 compute-0 podman[282924]: 2026-01-27 09:18:44.930809261 +0000 UTC m=+0.164271838 container remove d9a51a6089fc1110ebd8d83f46e8c4a010d50b83b60cf10f671470788febf1d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wilbur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 27 09:18:44 compute-0 systemd[1]: libpod-conmon-d9a51a6089fc1110ebd8d83f46e8c4a010d50b83b60cf10f671470788febf1d0.scope: Deactivated successfully.
Jan 27 09:18:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:18:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:18:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:18:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:18:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:18:45 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:18:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:18:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:18:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:18:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:18:45 compute-0 podman[282961]: 2026-01-27 09:18:45.094443993 +0000 UTC m=+0.039267947 container create d2cab3a3f5b3c8e774b88873a5e4f3b98bc837993d488fa398a21c9f5379ba3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hugle, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 27 09:18:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:18:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:18:45 compute-0 systemd[1]: Started libpod-conmon-d2cab3a3f5b3c8e774b88873a5e4f3b98bc837993d488fa398a21c9f5379ba3a.scope.
Jan 27 09:18:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5ae9c2467e77bff860bfaabbb75fb64fe47557469a99305b7c8e592de4fc25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5ae9c2467e77bff860bfaabbb75fb64fe47557469a99305b7c8e592de4fc25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5ae9c2467e77bff860bfaabbb75fb64fe47557469a99305b7c8e592de4fc25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5ae9c2467e77bff860bfaabbb75fb64fe47557469a99305b7c8e592de4fc25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5ae9c2467e77bff860bfaabbb75fb64fe47557469a99305b7c8e592de4fc25/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:45 compute-0 podman[282961]: 2026-01-27 09:18:45.169069855 +0000 UTC m=+0.113893739 container init d2cab3a3f5b3c8e774b88873a5e4f3b98bc837993d488fa398a21c9f5379ba3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:18:45 compute-0 podman[282961]: 2026-01-27 09:18:45.07793013 +0000 UTC m=+0.022754034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:18:45 compute-0 podman[282961]: 2026-01-27 09:18:45.178418912 +0000 UTC m=+0.123242796 container start d2cab3a3f5b3c8e774b88873a5e4f3b98bc837993d488fa398a21c9f5379ba3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hugle, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:18:45 compute-0 podman[282961]: 2026-01-27 09:18:45.181929898 +0000 UTC m=+0.126753782 container attach d2cab3a3f5b3c8e774b88873a5e4f3b98bc837993d488fa398a21c9f5379ba3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hugle, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:18:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:45.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:45.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:45 compute-0 amazing_hugle[282978]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:18:45 compute-0 amazing_hugle[282978]: --> relative data size: 1.0
Jan 27 09:18:45 compute-0 amazing_hugle[282978]: --> All data devices are unavailable
Jan 27 09:18:45 compute-0 systemd[1]: libpod-d2cab3a3f5b3c8e774b88873a5e4f3b98bc837993d488fa398a21c9f5379ba3a.scope: Deactivated successfully.
Jan 27 09:18:45 compute-0 podman[282961]: 2026-01-27 09:18:45.965091173 +0000 UTC m=+0.909915087 container died d2cab3a3f5b3c8e774b88873a5e4f3b98bc837993d488fa398a21c9f5379ba3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hugle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:18:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a5ae9c2467e77bff860bfaabbb75fb64fe47557469a99305b7c8e592de4fc25-merged.mount: Deactivated successfully.
Jan 27 09:18:46 compute-0 podman[282961]: 2026-01-27 09:18:46.020580892 +0000 UTC m=+0.965404776 container remove d2cab3a3f5b3c8e774b88873a5e4f3b98bc837993d488fa398a21c9f5379ba3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:18:46 compute-0 systemd[1]: libpod-conmon-d2cab3a3f5b3c8e774b88873a5e4f3b98bc837993d488fa398a21c9f5379ba3a.scope: Deactivated successfully.
Jan 27 09:18:46 compute-0 sudo[282856]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:46 compute-0 ceph-mon[74357]: pgmap v1683: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:46 compute-0 sudo[283006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:46 compute-0 sudo[283006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:46 compute-0 sudo[283006]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:46 compute-0 sudo[283031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:18:46 compute-0 sudo[283031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:46 compute-0 sudo[283031]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:46 compute-0 sudo[283056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:46 compute-0 sudo[283056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:46 compute-0 sudo[283056]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:46 compute-0 sudo[283081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:18:46 compute-0 sudo[283081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:46 compute-0 podman[283145]: 2026-01-27 09:18:46.571798815 +0000 UTC m=+0.036245003 container create 5ca1a49f8c87e7c41802d1d4b4d9ce0e1608691ba353d3c6d6b16686380c5248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 27 09:18:46 compute-0 systemd[1]: Started libpod-conmon-5ca1a49f8c87e7c41802d1d4b4d9ce0e1608691ba353d3c6d6b16686380c5248.scope.
Jan 27 09:18:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:18:46 compute-0 podman[283145]: 2026-01-27 09:18:46.634899334 +0000 UTC m=+0.099345542 container init 5ca1a49f8c87e7c41802d1d4b4d9ce0e1608691ba353d3c6d6b16686380c5248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 27 09:18:46 compute-0 podman[283145]: 2026-01-27 09:18:46.641475143 +0000 UTC m=+0.105921331 container start 5ca1a49f8c87e7c41802d1d4b4d9ce0e1608691ba353d3c6d6b16686380c5248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_almeida, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:18:46 compute-0 podman[283145]: 2026-01-27 09:18:46.644464735 +0000 UTC m=+0.108910943 container attach 5ca1a49f8c87e7c41802d1d4b4d9ce0e1608691ba353d3c6d6b16686380c5248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_almeida, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:18:46 compute-0 mystifying_almeida[283163]: 167 167
Jan 27 09:18:46 compute-0 systemd[1]: libpod-5ca1a49f8c87e7c41802d1d4b4d9ce0e1608691ba353d3c6d6b16686380c5248.scope: Deactivated successfully.
Jan 27 09:18:46 compute-0 podman[283145]: 2026-01-27 09:18:46.64682862 +0000 UTC m=+0.111274808 container died 5ca1a49f8c87e7c41802d1d4b4d9ce0e1608691ba353d3c6d6b16686380c5248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:18:46 compute-0 podman[283145]: 2026-01-27 09:18:46.556911218 +0000 UTC m=+0.021357426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:18:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa25b6d0070fc62883c4282406f6ebd3a5452182e11e8c1b79fbb91559138557-merged.mount: Deactivated successfully.
Jan 27 09:18:46 compute-0 podman[283145]: 2026-01-27 09:18:46.679954667 +0000 UTC m=+0.144400855 container remove 5ca1a49f8c87e7c41802d1d4b4d9ce0e1608691ba353d3c6d6b16686380c5248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_almeida, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:18:46 compute-0 systemd[1]: libpod-conmon-5ca1a49f8c87e7c41802d1d4b4d9ce0e1608691ba353d3c6d6b16686380c5248.scope: Deactivated successfully.
Jan 27 09:18:46 compute-0 podman[283186]: 2026-01-27 09:18:46.816269309 +0000 UTC m=+0.033808266 container create 26c5e6be8ad7ee432f95e898042077969ff66f5ea35408da277e11ec3f1d3c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:18:46 compute-0 systemd[1]: Started libpod-conmon-26c5e6be8ad7ee432f95e898042077969ff66f5ea35408da277e11ec3f1d3c6a.scope.
Jan 27 09:18:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283719c6541e531e9c599ded2b2463956d977ced388f85a3cf957228ac7fe589/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283719c6541e531e9c599ded2b2463956d977ced388f85a3cf957228ac7fe589/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283719c6541e531e9c599ded2b2463956d977ced388f85a3cf957228ac7fe589/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283719c6541e531e9c599ded2b2463956d977ced388f85a3cf957228ac7fe589/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:46 compute-0 podman[283186]: 2026-01-27 09:18:46.802931055 +0000 UTC m=+0.020470042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:18:46 compute-0 podman[283186]: 2026-01-27 09:18:46.914005835 +0000 UTC m=+0.131544822 container init 26c5e6be8ad7ee432f95e898042077969ff66f5ea35408da277e11ec3f1d3c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:18:46 compute-0 podman[283186]: 2026-01-27 09:18:46.921058579 +0000 UTC m=+0.138597536 container start 26c5e6be8ad7ee432f95e898042077969ff66f5ea35408da277e11ec3f1d3c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:18:46 compute-0 podman[283186]: 2026-01-27 09:18:46.925422328 +0000 UTC m=+0.142961325 container attach 26c5e6be8ad7ee432f95e898042077969ff66f5ea35408da277e11ec3f1d3c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:18:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:47.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:47.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]: {
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:     "0": [
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:         {
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             "devices": [
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "/dev/loop3"
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             ],
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             "lv_name": "ceph_lv0",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             "lv_size": "7511998464",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             "name": "ceph_lv0",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             "tags": {
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "ceph.cluster_name": "ceph",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "ceph.crush_device_class": "",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "ceph.encrypted": "0",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "ceph.osd_id": "0",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "ceph.type": "block",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:                 "ceph.vdo": "0"
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             },
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             "type": "block",
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:             "vg_name": "ceph_vg0"
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:         }
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]:     ]
Jan 27 09:18:47 compute-0 reverent_mclaren[283203]: }
Jan 27 09:18:47 compute-0 podman[283186]: 2026-01-27 09:18:47.676736171 +0000 UTC m=+0.894275138 container died 26c5e6be8ad7ee432f95e898042077969ff66f5ea35408da277e11ec3f1d3c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Jan 27 09:18:47 compute-0 systemd[1]: libpod-26c5e6be8ad7ee432f95e898042077969ff66f5ea35408da277e11ec3f1d3c6a.scope: Deactivated successfully.
Jan 27 09:18:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-283719c6541e531e9c599ded2b2463956d977ced388f85a3cf957228ac7fe589-merged.mount: Deactivated successfully.
Jan 27 09:18:47 compute-0 podman[283186]: 2026-01-27 09:18:47.728483838 +0000 UTC m=+0.946022815 container remove 26c5e6be8ad7ee432f95e898042077969ff66f5ea35408da277e11ec3f1d3c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:18:47 compute-0 systemd[1]: libpod-conmon-26c5e6be8ad7ee432f95e898042077969ff66f5ea35408da277e11ec3f1d3c6a.scope: Deactivated successfully.
Jan 27 09:18:47 compute-0 sudo[283081]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:47 compute-0 sudo[283225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:47 compute-0 sudo[283225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:47 compute-0 sudo[283225]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:47 compute-0 sudo[283250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:18:47 compute-0 sudo[283250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:47 compute-0 sudo[283250]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:47 compute-0 sudo[283275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:47 compute-0 sudo[283275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:47 compute-0 sudo[283275]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:47 compute-0 sudo[283300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:18:47 compute-0 sudo[283300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:48 compute-0 ceph-mon[74357]: pgmap v1684: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:48 compute-0 podman[283364]: 2026-01-27 09:18:48.275140546 +0000 UTC m=+0.034896456 container create a8d8384f7d4b11cc223d8353353b68b3f5d661cc015e9510ad427ad176b0bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 09:18:48 compute-0 systemd[1]: Started libpod-conmon-a8d8384f7d4b11cc223d8353353b68b3f5d661cc015e9510ad427ad176b0bca6.scope.
Jan 27 09:18:48 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:18:48 compute-0 podman[283364]: 2026-01-27 09:18:48.348786033 +0000 UTC m=+0.108541943 container init a8d8384f7d4b11cc223d8353353b68b3f5d661cc015e9510ad427ad176b0bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:18:48 compute-0 podman[283364]: 2026-01-27 09:18:48.354251163 +0000 UTC m=+0.114007073 container start a8d8384f7d4b11cc223d8353353b68b3f5d661cc015e9510ad427ad176b0bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:18:48 compute-0 podman[283364]: 2026-01-27 09:18:48.259980262 +0000 UTC m=+0.019736192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:18:48 compute-0 podman[283364]: 2026-01-27 09:18:48.357198643 +0000 UTC m=+0.116954573 container attach a8d8384f7d4b11cc223d8353353b68b3f5d661cc015e9510ad427ad176b0bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:18:48 compute-0 cool_murdock[283381]: 167 167
Jan 27 09:18:48 compute-0 systemd[1]: libpod-a8d8384f7d4b11cc223d8353353b68b3f5d661cc015e9510ad427ad176b0bca6.scope: Deactivated successfully.
Jan 27 09:18:48 compute-0 podman[283364]: 2026-01-27 09:18:48.360408422 +0000 UTC m=+0.120164352 container died a8d8384f7d4b11cc223d8353353b68b3f5d661cc015e9510ad427ad176b0bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-071a4c2f0cef72bdb8f0d75c1fb7cfef5cf2ed5c0dbfe9bc5bfdb32483b506f8-merged.mount: Deactivated successfully.
Jan 27 09:18:48 compute-0 podman[283364]: 2026-01-27 09:18:48.394237508 +0000 UTC m=+0.153993428 container remove a8d8384f7d4b11cc223d8353353b68b3f5d661cc015e9510ad427ad176b0bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 27 09:18:48 compute-0 systemd[1]: libpod-conmon-a8d8384f7d4b11cc223d8353353b68b3f5d661cc015e9510ad427ad176b0bca6.scope: Deactivated successfully.
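[editor's note] The sequence above (container create -> init -> start -> attach -> died -> remove, all within roughly 130 ms) is cephadm's pattern of running short-lived helper containers: each probe is a one-shot podman run against the pinned ceph image, torn down as soon as its output is collected. A minimal sketch of the same idea, assuming podman is on PATH and using the image digest from the log; the extra flags cephadm actually passes (privileges, /dev and config bind-mounts) are omitted here:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def run_oneshot(args):
        # --rm removes the container on exit, matching the
        # create/start/died/remove burst seen in the journal.
        result = subprocess.run(
            ["podman", "run", "--rm", IMAGE] + args,
            check=True, capture_output=True, text=True,
        )
        return result.stdout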
Jan 27 09:18:48 compute-0 podman[283406]: 2026-01-27 09:18:48.542141757 +0000 UTC m=+0.036902211 container create 2db328cf96d3d20affa47a699a4ba014ab602542c8b5b728c6634fb53ba4fd54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 09:18:48 compute-0 systemd[1]: Started libpod-conmon-2db328cf96d3d20affa47a699a4ba014ab602542c8b5b728c6634fb53ba4fd54.scope.
Jan 27 09:18:48 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b0d94d2e92cc8c52b30f12130c8d87b199bf995eecd667024de1bb4926059ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b0d94d2e92cc8c52b30f12130c8d87b199bf995eecd667024de1bb4926059ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b0d94d2e92cc8c52b30f12130c8d87b199bf995eecd667024de1bb4926059ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b0d94d2e92cc8c52b30f12130c8d87b199bf995eecd667024de1bb4926059ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:48 compute-0 podman[283406]: 2026-01-27 09:18:48.614879709 +0000 UTC m=+0.109640173 container init 2db328cf96d3d20affa47a699a4ba014ab602542c8b5b728c6634fb53ba4fd54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 27 09:18:48 compute-0 podman[283406]: 2026-01-27 09:18:48.526182481 +0000 UTC m=+0.020942955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:18:48 compute-0 podman[283406]: 2026-01-27 09:18:48.623023432 +0000 UTC m=+0.117783886 container start 2db328cf96d3d20affa47a699a4ba014ab602542c8b5b728c6634fb53ba4fd54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:18:48 compute-0 podman[283406]: 2026-01-27 09:18:48.625994183 +0000 UTC m=+0.120754657 container attach 2db328cf96d3d20affa47a699a4ba014ab602542c8b5b728c6634fb53ba4fd54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 27 09:18:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:49.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:49 compute-0 ceph-mon[74357]: pgmap v1685: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:49.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
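[editor's note] The paired "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102, repeating on a ~2 s cadence and always answered 200 with near-zero latency, have the signature of external load-balancer health probes against the radosgw beast frontend. A minimal reproduction of one probe, with the gateway host and port as assumptions (neither is visible in these lines); note http.client sends HTTP/1.1 rather than the probes' HTTP/1.0:

    import http.client

    RGW_HOST, RGW_PORT = "compute-0", 8080   # assumed endpoint

    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=2)
    conn.request("HEAD", "/")        # same anonymous probe as in the log
    resp = conn.getresponse()
    print(resp.status)               # expect 200, matching the beast access line
    conn.close()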
Jan 27 09:18:49 compute-0 peaceful_banzai[283424]: {
Jan 27 09:18:49 compute-0 peaceful_banzai[283424]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:18:49 compute-0 peaceful_banzai[283424]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:18:49 compute-0 peaceful_banzai[283424]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:18:49 compute-0 peaceful_banzai[283424]:         "osd_id": 0,
Jan 27 09:18:49 compute-0 peaceful_banzai[283424]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:18:49 compute-0 peaceful_banzai[283424]:         "type": "bluestore"
Jan 27 09:18:49 compute-0 peaceful_banzai[283424]:     }
Jan 27 09:18:49 compute-0 peaceful_banzai[283424]: }
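[editor's note] The block above is the JSON that ceph-volume "raw list --format json" printed inside the helper container: a map keyed by OSD UUID, each entry carrying the cluster fsid, backing device, OSD id, and objectstore type. A short sketch of consuming it, assuming the container output has been captured into a string:

    import json

    raw = """{
      "c06a7c81-ab3c-42b8-812f-79473670be30": {
        "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
        "type": "bluestore"
      }
    }"""

    for osd_uuid, info in json.loads(raw).items():
        print(info["osd_id"], info["device"], info["type"])
        # -> 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore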
Jan 27 09:18:49 compute-0 systemd[1]: libpod-2db328cf96d3d20affa47a699a4ba014ab602542c8b5b728c6634fb53ba4fd54.scope: Deactivated successfully.
Jan 27 09:18:49 compute-0 podman[283406]: 2026-01-27 09:18:49.46109502 +0000 UTC m=+0.955855474 container died 2db328cf96d3d20affa47a699a4ba014ab602542c8b5b728c6634fb53ba4fd54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 27 09:18:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b0d94d2e92cc8c52b30f12130c8d87b199bf995eecd667024de1bb4926059ac-merged.mount: Deactivated successfully.
Jan 27 09:18:49 compute-0 podman[283406]: 2026-01-27 09:18:49.534217203 +0000 UTC m=+1.028977657 container remove 2db328cf96d3d20affa47a699a4ba014ab602542c8b5b728c6634fb53ba4fd54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:18:49 compute-0 systemd[1]: libpod-conmon-2db328cf96d3d20affa47a699a4ba014ab602542c8b5b728c6634fb53ba4fd54.scope: Deactivated successfully.
Jan 27 09:18:49 compute-0 sudo[283300]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:18:49 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:18:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:18:49 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:18:49 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 514231d3-6dd6-44ae-9c06-c4030c751809 does not exist
Jan 27 09:18:49 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d69cdab4-c3a9-467c-8188-b773659ec38a does not exist
Jan 27 09:18:49 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 18b8a095-cad4-410f-ab65-879a84c9de5f does not exist
Jan 27 09:18:49 compute-0 sudo[283456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:49 compute-0 sudo[283456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:49 compute-0 sudo[283456]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:49 compute-0 sudo[283481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:18:49 compute-0 sudo[283481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:49 compute-0 sudo[283481]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:50 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:18:50 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:18:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:51.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:51 compute-0 ceph-mon[74357]: pgmap v1686: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:53.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:53.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:54 compute-0 ceph-mon[74357]: pgmap v1687: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:54 compute-0 podman[283508]: 2026-01-27 09:18:54.234032813 +0000 UTC m=+0.050749000 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:18:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:18:54.259 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:18:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:18:54.259 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:18:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:18:54.259 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
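[editor's note] The acquire/acquired/released trio above is oslo.concurrency's standard DEBUG trace for a synchronized method: the ProcessMonitor takes an in-process lock named "_check_child_processes", runs, and releases it, all within a millisecond. The same pattern in application code, as a hedged sketch (the function body here is illustrative, not neutron's actual wiring):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the named lock held; the decorator's inner wrapper
        # emits the "Acquiring"/"acquired"/"released" DEBUG lines seen
        # in the journal when oslo logging is set to DEBUG.
        pass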
Jan 27 09:18:54 compute-0 sudo[283528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:54 compute-0 sudo[283528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:54 compute-0 sudo[283528]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:54 compute-0 sudo[283553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:18:54 compute-0 sudo[283553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:18:54 compute-0 sudo[283553]: pam_unix(sudo:session): session closed for user root
Jan 27 09:18:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:55.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:55.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:56 compute-0 ceph-mon[74357]: pgmap v1688: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:57.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:18:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:57.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:18:58 compute-0 ceph-mon[74357]: pgmap v1689: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:18:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:18:59.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:18:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:18:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:18:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:18:59.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:00 compute-0 ceph-mon[74357]: pgmap v1690: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1183349929' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:19:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1183349929' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:19:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:01.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:01.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:01 compute-0 ceph-mon[74357]: pgmap v1691: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:03.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:03.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:04 compute-0 ceph-mon[74357]: pgmap v1692: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:05.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:05.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:05 compute-0 ceph-mon[74357]: pgmap v1693: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:07.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:07.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:08 compute-0 ceph-mon[74357]: pgmap v1694: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:09.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:09.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:09 compute-0 ceph-mon[74357]: pgmap v1695: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:10 compute-0 podman[283586]: 2026-01-27 09:19:10.253620473 +0000 UTC m=+0.072211679 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:19:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:11.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:11.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:11 compute-0 ceph-mon[74357]: pgmap v1696: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:13.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:13.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:13 compute-0 ceph-mon[74357]: pgmap v1697: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:14 compute-0 sudo[283615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:14 compute-0 sudo[283615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:14 compute-0 sudo[283615]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:14 compute-0 sudo[283640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:14 compute-0 sudo[283640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:14 compute-0 sudo[283640]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:19:15
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', '.mgr', 'volumes', 'default.rgw.meta']
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
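[editor's note] These five lines record one balancer optimization round: mode upmap, a 5% misplaced-PG budget, eleven pools evaluated, and "prepared 0/10 changes" -- no remapping out of an apparent per-round cap of ten was needed, consistent with the 305 PGs sitting active+clean throughout this log. A quick way to confirm that state, sketched in Python and assuming the ceph CLI plus an admin keyring are available on the host (output shape varies by release, so it is printed verbatim):

    import subprocess

    status = subprocess.run(["ceph", "balancer", "status"],
                            capture_output=True, text=True, check=True)
    print(status.stdout)   # reports active mode and last optimization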
Jan 27 09:19:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:15.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:15.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:19:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:19:15 compute-0 nova_compute[247671]: 2026-01-27 09:19:15.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:16 compute-0 ceph-mon[74357]: pgmap v1698: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:17.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:17.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:17 compute-0 nova_compute[247671]: 2026-01-27 09:19:17.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:17 compute-0 ceph-mon[74357]: pgmap v1699: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:18 compute-0 nova_compute[247671]: 2026-01-27 09:19:18.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:18 compute-0 nova_compute[247671]: 2026-01-27 09:19:18.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:18 compute-0 nova_compute[247671]: 2026-01-27 09:19:18.421 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
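[editor's note] The last two nova-compute lines show a periodic task short-circuiting on configuration: _reclaim_queued_deletes only does work when reclaim_instance_interval is positive (i.e. soft-delete reclaim is enabled); at the default of 0 it logs "skipping..." and returns. A minimal, self-contained paraphrase of the guard logged at nova/compute/manager.py:10477 (not the actual nova source):

    reclaim_instance_interval = 0   # nova's default: soft-delete reclaim disabled

    def reclaim_queued_deletes(interval: int) -> None:
        if interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...would look up SOFT_DELETED instances older than `interval`
        #    seconds and purge them...

    reclaim_queued_deletes(reclaim_instance_interval)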
Jan 27 09:19:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:19.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:19.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:19 compute-0 nova_compute[247671]: 2026-01-27 09:19:19.417 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:20 compute-0 ceph-mon[74357]: pgmap v1700: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:21.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:21.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:21 compute-0 nova_compute[247671]: 2026-01-27 09:19:21.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:21 compute-0 nova_compute[247671]: 2026-01-27 09:19:21.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:19:21 compute-0 nova_compute[247671]: 2026-01-27 09:19:21.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:19:21 compute-0 nova_compute[247671]: 2026-01-27 09:19:21.437 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.494950) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505561495008, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 997, "num_deletes": 250, "total_data_size": 1591586, "memory_usage": 1619160, "flush_reason": "Manual Compaction"}
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505561554786, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 947291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36649, "largest_seqno": 37645, "table_properties": {"data_size": 943449, "index_size": 1494, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10164, "raw_average_key_size": 20, "raw_value_size": 935229, "raw_average_value_size": 1904, "num_data_blocks": 67, "num_entries": 491, "num_filter_entries": 491, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769505467, "oldest_key_time": 1769505467, "file_creation_time": 1769505561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 59907 microseconds, and 3539 cpu microseconds.
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.554864) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 947291 bytes OK
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.554936) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.625017) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.625062) EVENT_LOG_v1 {"time_micros": 1769505561625052, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.625089) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1587033, prev total WAL file size 1603008, number of live WAL files 2.
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.625875) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(925KB)], [80(11MB)]
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505561625964, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 12564554, "oldest_snapshot_seqno": -1}
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6025 keys, 9338013 bytes, temperature: kUnknown
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505561789990, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 9338013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9299156, "index_size": 22683, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 154841, "raw_average_key_size": 25, "raw_value_size": 9191815, "raw_average_value_size": 1525, "num_data_blocks": 917, "num_entries": 6025, "num_filter_entries": 6025, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769505561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.790446) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 9338013 bytes
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.836936) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 76.6 rd, 56.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.1 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(23.1) write-amplify(9.9) OK, records in: 6497, records dropped: 472 output_compression: NoCompression
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.836976) EVENT_LOG_v1 {"time_micros": 1769505561836962, "job": 46, "event": "compaction_finished", "compaction_time_micros": 164085, "compaction_time_cpu_micros": 23036, "output_level": 6, "num_output_files": 1, "total_output_size": 9338013, "num_input_records": 6497, "num_output_records": 6025, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505561837590, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505561839976, "job": 46, "event": "table_file_deletion", "file_number": 80}
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.625804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.840049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.840054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.840056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.840057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:19:21 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:19:21.840059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:19:21 compute-0 ceph-mon[74357]: pgmap v1701: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:22 compute-0 nova_compute[247671]: 2026-01-27 09:19:22.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:22 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:23.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:23.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:23 compute-0 nova_compute[247671]: 2026-01-27 09:19:23.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:23 compute-0 nova_compute[247671]: 2026-01-27 09:19:23.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 09:19:23 compute-0 ceph-mon[74357]: pgmap v1702: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:24 compute-0 nova_compute[247671]: 2026-01-27 09:19:24.436 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:24 compute-0 nova_compute[247671]: 2026-01-27 09:19:24.463 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:19:24 compute-0 nova_compute[247671]: 2026-01-27 09:19:24.463 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:19:24 compute-0 nova_compute[247671]: 2026-01-27 09:19:24.464 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:19:24 compute-0 nova_compute[247671]: 2026-01-27 09:19:24.464 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:19:24 compute-0 nova_compute[247671]: 2026-01-27 09:19:24.464 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:19:24 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:19:24 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3687790959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:19:24 compute-0 nova_compute[247671]: 2026-01-27 09:19:24.919 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:19:24 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.073 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.074 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5137MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.075 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.075 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:19:25 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3687790959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.148 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.149 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.149 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.181 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:19:25 compute-0 podman[283692]: 2026-01-27 09:19:25.231681503 +0000 UTC m=+0.050148714 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 09:19:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:25.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:25.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:25 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:19:25 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3787196815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.626 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.631 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.647 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.648 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.648 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.649 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.649 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 09:19:25 compute-0 nova_compute[247671]: 2026-01-27 09:19:25.662 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 09:19:26 compute-0 ceph-mon[74357]: pgmap v1703: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1042388648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:19:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3787196815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:19:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/814408538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:19:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:26 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:27.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:27.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:27 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2959754964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:19:27 compute-0 ceph-mon[74357]: pgmap v1704: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:28 compute-0 nova_compute[247671]: 2026-01-27 09:19:28.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:28 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3753129006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:19:28 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:29.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:29.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:29 compute-0 nova_compute[247671]: 2026-01-27 09:19:29.444 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:29 compute-0 ceph-mon[74357]: pgmap v1705: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:19:30 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 40K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3958 syncs, 3.17 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2506 writes, 6110 keys, 2506 commit groups, 1.0 writes per commit group, ingest: 3.25 MB, 0.01 MB/s
                                           Interval WAL: 2506 writes, 1170 syncs, 2.14 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:19:30 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:31.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:31 compute-0 ceph-mon[74357]: pgmap v1706: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:31.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:32 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:33.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:33.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:33 compute-0 nova_compute[247671]: 2026-01-27 09:19:33.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:34 compute-0 ceph-mon[74357]: pgmap v1707: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:34 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:35 compute-0 sudo[283739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:35 compute-0 sudo[283739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:35 compute-0 sudo[283739]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:35 compute-0 sudo[283764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:35 compute-0 sudo[283764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:35 compute-0 sudo[283764]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:35.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:35.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:36 compute-0 ceph-mon[74357]: pgmap v1708: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:36 compute-0 ceph-mgr[74650]: [devicehealth INFO root] Check health
Jan 27 09:19:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:36 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:37.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:37.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:37 compute-0 ceph-mon[74357]: pgmap v1709: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:38 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:39.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:39.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:40 compute-0 ceph-mon[74357]: pgmap v1710: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:40 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:41 compute-0 podman[283792]: 2026-01-27 09:19:41.259145028 +0000 UTC m=+0.071242772 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 09:19:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:41.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:41.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:41 compute-0 ceph-mon[74357]: pgmap v1711: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:42 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:43.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:43.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:43 compute-0 nova_compute[247671]: 2026-01-27 09:19:43.384 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:19:44 compute-0 ceph-mon[74357]: pgmap v1712: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:44 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:19:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:19:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:19:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:19:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:19:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:19:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:45.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:45.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:46 compute-0 ceph-mon[74357]: pgmap v1713: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:46 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:47 compute-0 ceph-mon[74357]: pgmap v1714: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:47.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:47.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:48 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:49.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:49.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:50 compute-0 sudo[283823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:50 compute-0 sudo[283823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:50 compute-0 sudo[283823]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:50 compute-0 ceph-mon[74357]: pgmap v1715: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:50 compute-0 sudo[283848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:19:50 compute-0 sudo[283848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:50 compute-0 sudo[283848]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:50 compute-0 sudo[283873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:50 compute-0 sudo[283873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:50 compute-0 sudo[283873]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:50 compute-0 sudo[283898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 27 09:19:50 compute-0 sudo[283898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:50 compute-0 sudo[283898]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:19:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:19:50 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:19:50 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:19:50 compute-0 sudo[283944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:50 compute-0 sudo[283944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:50 compute-0 sudo[283944]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:50 compute-0 sudo[283969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:19:50 compute-0 sudo[283969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:50 compute-0 sudo[283969]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:50 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:51 compute-0 sudo[283994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:51 compute-0 sudo[283994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:51 compute-0 sudo[283994]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:51 compute-0 sudo[284019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:19:51 compute-0 sudo[284019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:51.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:51.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:51 compute-0 sudo[284019]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:19:51 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:19:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:19:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:19:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:19:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:19:51 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev d8236e93-52b8-492e-ae1f-7371e5ac73dd does not exist
Jan 27 09:19:51 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 4dae2df4-e5d7-4fc9-968f-cd28741988e8 does not exist
Jan 27 09:19:51 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 755ee83f-d5da-4835-ae77-79dfbc03d304 does not exist
Jan 27 09:19:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:19:51 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:19:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:19:51 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:19:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:19:51 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:19:51 compute-0 sudo[284075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:51 compute-0 sudo[284075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:51 compute-0 sudo[284075]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:51 compute-0 sudo[284100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:19:51 compute-0 sudo[284100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:51 compute-0 sudo[284100]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:19:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:19:51 compute-0 ceph-mon[74357]: pgmap v1716: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:19:51 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:19:51 compute-0 sudo[284125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:51 compute-0 sudo[284125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:51 compute-0 sudo[284125]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:51 compute-0 sudo[284150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:19:51 compute-0 sudo[284150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:52 compute-0 podman[284215]: 2026-01-27 09:19:52.217692206 +0000 UTC m=+0.039353609 container create 1bbe4549cccca2c84d5cb3e10b441b81402a2d713d9e2baceed607b805235809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 27 09:19:52 compute-0 systemd[1]: Started libpod-conmon-1bbe4549cccca2c84d5cb3e10b441b81402a2d713d9e2baceed607b805235809.scope.
Jan 27 09:19:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:19:52 compute-0 podman[284215]: 2026-01-27 09:19:52.198554772 +0000 UTC m=+0.020216195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:19:52 compute-0 podman[284215]: 2026-01-27 09:19:52.296058721 +0000 UTC m=+0.117720154 container init 1bbe4549cccca2c84d5cb3e10b441b81402a2d713d9e2baceed607b805235809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 27 09:19:52 compute-0 podman[284215]: 2026-01-27 09:19:52.303814834 +0000 UTC m=+0.125476237 container start 1bbe4549cccca2c84d5cb3e10b441b81402a2d713d9e2baceed607b805235809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_euler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 09:19:52 compute-0 systemd[1]: libpod-1bbe4549cccca2c84d5cb3e10b441b81402a2d713d9e2baceed607b805235809.scope: Deactivated successfully.
Jan 27 09:19:52 compute-0 upbeat_euler[284232]: 167 167
Jan 27 09:19:52 compute-0 podman[284215]: 2026-01-27 09:19:52.310522798 +0000 UTC m=+0.132184231 container attach 1bbe4549cccca2c84d5cb3e10b441b81402a2d713d9e2baceed607b805235809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:19:52 compute-0 conmon[284232]: conmon 1bbe4549cccca2c84d5c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1bbe4549cccca2c84d5cb3e10b441b81402a2d713d9e2baceed607b805235809.scope/container/memory.events
Jan 27 09:19:52 compute-0 podman[284215]: 2026-01-27 09:19:52.311870935 +0000 UTC m=+0.133532338 container died 1bbe4549cccca2c84d5cb3e10b441b81402a2d713d9e2baceed607b805235809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_euler, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:19:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-18c8eb4502d536e666411418e2a805f2b9db1f3dad850fb6d878a667f7ded995-merged.mount: Deactivated successfully.
Jan 27 09:19:52 compute-0 podman[284215]: 2026-01-27 09:19:52.382711624 +0000 UTC m=+0.204373017 container remove 1bbe4549cccca2c84d5cb3e10b441b81402a2d713d9e2baceed607b805235809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_euler, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 27 09:19:52 compute-0 systemd[1]: libpod-conmon-1bbe4549cccca2c84d5cb3e10b441b81402a2d713d9e2baceed607b805235809.scope: Deactivated successfully.
Jan 27 09:19:52 compute-0 podman[284255]: 2026-01-27 09:19:52.564636996 +0000 UTC m=+0.061974458 container create a2b3461777b7d0e07cbdc027b37cef0609a374a2926fb08fd21249aebcf9b96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:19:52 compute-0 systemd[1]: Started libpod-conmon-a2b3461777b7d0e07cbdc027b37cef0609a374a2926fb08fd21249aebcf9b96c.scope.
Jan 27 09:19:52 compute-0 podman[284255]: 2026-01-27 09:19:52.524390664 +0000 UTC m=+0.021728156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:19:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b54b7c1e62ae53bd20b322adf650cebc6b9e088c997a7d7c16902a67c1fc30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b54b7c1e62ae53bd20b322adf650cebc6b9e088c997a7d7c16902a67c1fc30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b54b7c1e62ae53bd20b322adf650cebc6b9e088c997a7d7c16902a67c1fc30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b54b7c1e62ae53bd20b322adf650cebc6b9e088c997a7d7c16902a67c1fc30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b54b7c1e62ae53bd20b322adf650cebc6b9e088c997a7d7c16902a67c1fc30/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:52 compute-0 podman[284255]: 2026-01-27 09:19:52.645327866 +0000 UTC m=+0.142665348 container init a2b3461777b7d0e07cbdc027b37cef0609a374a2926fb08fd21249aebcf9b96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:19:52 compute-0 podman[284255]: 2026-01-27 09:19:52.651288219 +0000 UTC m=+0.148625681 container start a2b3461777b7d0e07cbdc027b37cef0609a374a2926fb08fd21249aebcf9b96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:19:52 compute-0 podman[284255]: 2026-01-27 09:19:52.654210699 +0000 UTC m=+0.151548161 container attach a2b3461777b7d0e07cbdc027b37cef0609a374a2926fb08fd21249aebcf9b96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:19:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:19:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:19:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:19:52 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:19:52 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:53.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:53.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:53 compute-0 eloquent_panini[284273]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:19:53 compute-0 eloquent_panini[284273]: --> relative data size: 1.0
Jan 27 09:19:53 compute-0 eloquent_panini[284273]: --> All data devices are unavailable
Jan 27 09:19:53 compute-0 systemd[1]: libpod-a2b3461777b7d0e07cbdc027b37cef0609a374a2926fb08fd21249aebcf9b96c.scope: Deactivated successfully.
Jan 27 09:19:53 compute-0 podman[284255]: 2026-01-27 09:19:53.451819429 +0000 UTC m=+0.949156891 container died a2b3461777b7d0e07cbdc027b37cef0609a374a2926fb08fd21249aebcf9b96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:19:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-00b54b7c1e62ae53bd20b322adf650cebc6b9e088c997a7d7c16902a67c1fc30-merged.mount: Deactivated successfully.
Jan 27 09:19:53 compute-0 podman[284255]: 2026-01-27 09:19:53.505733935 +0000 UTC m=+1.003071387 container remove a2b3461777b7d0e07cbdc027b37cef0609a374a2926fb08fd21249aebcf9b96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:19:53 compute-0 systemd[1]: libpod-conmon-a2b3461777b7d0e07cbdc027b37cef0609a374a2926fb08fd21249aebcf9b96c.scope: Deactivated successfully.
Jan 27 09:19:53 compute-0 sudo[284150]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:53 compute-0 sudo[284301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:53 compute-0 sudo[284301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:53 compute-0 sudo[284301]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:53 compute-0 sudo[284326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:19:53 compute-0 sudo[284326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:53 compute-0 sudo[284326]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:53 compute-0 sudo[284351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:53 compute-0 sudo[284351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:53 compute-0 sudo[284351]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:53 compute-0 sudo[284376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:19:53 compute-0 sudo[284376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:54 compute-0 ceph-mon[74357]: pgmap v1717: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:54 compute-0 podman[284438]: 2026-01-27 09:19:54.028662754 +0000 UTC m=+0.033591520 container create 40ffc96d47afc2270896c5ea312de0305a4de0da622b76b057eeb0d583408a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_engelbart, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:19:54 compute-0 systemd[1]: Started libpod-conmon-40ffc96d47afc2270896c5ea312de0305a4de0da622b76b057eeb0d583408a71.scope.
Jan 27 09:19:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:19:54 compute-0 podman[284438]: 2026-01-27 09:19:54.09530904 +0000 UTC m=+0.100237826 container init 40ffc96d47afc2270896c5ea312de0305a4de0da622b76b057eeb0d583408a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_engelbart, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 09:19:54 compute-0 podman[284438]: 2026-01-27 09:19:54.101259292 +0000 UTC m=+0.106188058 container start 40ffc96d47afc2270896c5ea312de0305a4de0da622b76b057eeb0d583408a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_engelbart, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 27 09:19:54 compute-0 podman[284438]: 2026-01-27 09:19:54.104491331 +0000 UTC m=+0.109420117 container attach 40ffc96d47afc2270896c5ea312de0305a4de0da622b76b057eeb0d583408a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:19:54 compute-0 strange_engelbart[284454]: 167 167
Jan 27 09:19:54 compute-0 systemd[1]: libpod-40ffc96d47afc2270896c5ea312de0305a4de0da622b76b057eeb0d583408a71.scope: Deactivated successfully.
Jan 27 09:19:54 compute-0 podman[284438]: 2026-01-27 09:19:54.106720272 +0000 UTC m=+0.111649048 container died 40ffc96d47afc2270896c5ea312de0305a4de0da622b76b057eeb0d583408a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_engelbart, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:19:54 compute-0 podman[284438]: 2026-01-27 09:19:54.014492747 +0000 UTC m=+0.019421543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:19:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cbe258e37e37a14da1d43e8997f897c6870985067783e516f52a44257504527-merged.mount: Deactivated successfully.
Jan 27 09:19:54 compute-0 podman[284438]: 2026-01-27 09:19:54.136145477 +0000 UTC m=+0.141074243 container remove 40ffc96d47afc2270896c5ea312de0305a4de0da622b76b057eeb0d583408a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 27 09:19:54 compute-0 systemd[1]: libpod-conmon-40ffc96d47afc2270896c5ea312de0305a4de0da622b76b057eeb0d583408a71.scope: Deactivated successfully.
Jan 27 09:19:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:19:54.259 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:19:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:19:54.261 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:19:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:19:54.262 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:19:54 compute-0 podman[284478]: 2026-01-27 09:19:54.278168076 +0000 UTC m=+0.036159911 container create 1e813bccfed0329ec3649075179fb12bd87d18f4b706d3cfa1c4e141aa982114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ardinghelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 27 09:19:54 compute-0 systemd[1]: Started libpod-conmon-1e813bccfed0329ec3649075179fb12bd87d18f4b706d3cfa1c4e141aa982114.scope.
Jan 27 09:19:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452aa49748cbf8602b80d86d2fa6d411d7931937853cccf3b781a4d2b580f9f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452aa49748cbf8602b80d86d2fa6d411d7931937853cccf3b781a4d2b580f9f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452aa49748cbf8602b80d86d2fa6d411d7931937853cccf3b781a4d2b580f9f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452aa49748cbf8602b80d86d2fa6d411d7931937853cccf3b781a4d2b580f9f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:54 compute-0 podman[284478]: 2026-01-27 09:19:54.353297924 +0000 UTC m=+0.111289779 container init 1e813bccfed0329ec3649075179fb12bd87d18f4b706d3cfa1c4e141aa982114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ardinghelli, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 27 09:19:54 compute-0 podman[284478]: 2026-01-27 09:19:54.262855468 +0000 UTC m=+0.020847323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:19:54 compute-0 podman[284478]: 2026-01-27 09:19:54.36446526 +0000 UTC m=+0.122457095 container start 1e813bccfed0329ec3649075179fb12bd87d18f4b706d3cfa1c4e141aa982114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ardinghelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:19:54 compute-0 podman[284478]: 2026-01-27 09:19:54.367208495 +0000 UTC m=+0.125200350 container attach 1e813bccfed0329ec3649075179fb12bd87d18f4b706d3cfa1c4e141aa982114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ardinghelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:19:54 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]: {
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:     "0": [
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:         {
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             "devices": [
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "/dev/loop3"
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             ],
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             "lv_name": "ceph_lv0",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             "lv_size": "7511998464",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             "name": "ceph_lv0",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             "tags": {
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "ceph.cluster_name": "ceph",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "ceph.crush_device_class": "",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "ceph.encrypted": "0",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "ceph.osd_id": "0",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "ceph.type": "block",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:                 "ceph.vdo": "0"
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             },
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             "type": "block",
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:             "vg_name": "ceph_vg0"
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:         }
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]:     ]
Jan 27 09:19:55 compute-0 busy_ardinghelli[284494]: }
Jan 27 09:19:55 compute-0 systemd[1]: libpod-1e813bccfed0329ec3649075179fb12bd87d18f4b706d3cfa1c4e141aa982114.scope: Deactivated successfully.
Jan 27 09:19:55 compute-0 podman[284478]: 2026-01-27 09:19:55.137751724 +0000 UTC m=+0.895743559 container died 1e813bccfed0329ec3649075179fb12bd87d18f4b706d3cfa1c4e141aa982114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 27 09:19:55 compute-0 sudo[284514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:55 compute-0 sudo[284514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:55 compute-0 sudo[284514]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:55.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:55 compute-0 sudo[284546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:55 compute-0 sudo[284546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:55 compute-0 sudo[284546]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:55.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-452aa49748cbf8602b80d86d2fa6d411d7931937853cccf3b781a4d2b580f9f5-merged.mount: Deactivated successfully.
Jan 27 09:19:55 compute-0 podman[284478]: 2026-01-27 09:19:55.687511417 +0000 UTC m=+1.445503252 container remove 1e813bccfed0329ec3649075179fb12bd87d18f4b706d3cfa1c4e141aa982114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:19:55 compute-0 systemd[1]: libpod-conmon-1e813bccfed0329ec3649075179fb12bd87d18f4b706d3cfa1c4e141aa982114.scope: Deactivated successfully.
Jan 27 09:19:55 compute-0 sudo[284376]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:55 compute-0 podman[284539]: 2026-01-27 09:19:55.791650399 +0000 UTC m=+0.516827783 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:19:55 compute-0 sudo[284580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:55 compute-0 sudo[284580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:55 compute-0 sudo[284580]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:55 compute-0 sudo[284609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:19:55 compute-0 sudo[284609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:55 compute-0 sudo[284609]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:55 compute-0 sudo[284635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:55 compute-0 sudo[284635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:55 compute-0 sudo[284635]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:55 compute-0 sudo[284660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:19:55 compute-0 sudo[284660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:56 compute-0 ceph-mon[74357]: pgmap v1718: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:56 compute-0 podman[284726]: 2026-01-27 09:19:56.352675592 +0000 UTC m=+0.084806684 container create 6e511d2094ba55702fc942555d1c0f2f31548fd747cffc4add9c142dac42ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nash, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:19:56 compute-0 podman[284726]: 2026-01-27 09:19:56.290142989 +0000 UTC m=+0.022274111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:19:56 compute-0 systemd[1]: Started libpod-conmon-6e511d2094ba55702fc942555d1c0f2f31548fd747cffc4add9c142dac42ecec.scope.
Jan 27 09:19:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:19:56 compute-0 podman[284726]: 2026-01-27 09:19:56.533879833 +0000 UTC m=+0.266010945 container init 6e511d2094ba55702fc942555d1c0f2f31548fd747cffc4add9c142dac42ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nash, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:19:56 compute-0 podman[284726]: 2026-01-27 09:19:56.540905446 +0000 UTC m=+0.273036538 container start 6e511d2094ba55702fc942555d1c0f2f31548fd747cffc4add9c142dac42ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nash, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 27 09:19:56 compute-0 jovial_nash[284742]: 167 167
Jan 27 09:19:56 compute-0 systemd[1]: libpod-6e511d2094ba55702fc942555d1c0f2f31548fd747cffc4add9c142dac42ecec.scope: Deactivated successfully.
Jan 27 09:19:56 compute-0 podman[284726]: 2026-01-27 09:19:56.605119594 +0000 UTC m=+0.337250686 container attach 6e511d2094ba55702fc942555d1c0f2f31548fd747cffc4add9c142dac42ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nash, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:19:56 compute-0 podman[284726]: 2026-01-27 09:19:56.605654469 +0000 UTC m=+0.337785561 container died 6e511d2094ba55702fc942555d1c0f2f31548fd747cffc4add9c142dac42ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nash, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:19:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-22a0c32bdd0d323405ae1ac2876085cec4270466a640aed2d1c29d1ba2be7eb3-merged.mount: Deactivated successfully.
Jan 27 09:19:56 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:57 compute-0 podman[284726]: 2026-01-27 09:19:57.123498428 +0000 UTC m=+0.855629530 container remove 6e511d2094ba55702fc942555d1c0f2f31548fd747cffc4add9c142dac42ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:19:57 compute-0 systemd[1]: libpod-conmon-6e511d2094ba55702fc942555d1c0f2f31548fd747cffc4add9c142dac42ecec.scope: Deactivated successfully.
Jan 27 09:19:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:57.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:57 compute-0 podman[284766]: 2026-01-27 09:19:57.344661534 +0000 UTC m=+0.110589600 container create ea88dfe85058bb6e9825793df059e3b108bb0ab70bc9a017e03ac5adb49d97b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 27 09:19:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:57 compute-0 podman[284766]: 2026-01-27 09:19:57.25653357 +0000 UTC m=+0.022461616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:19:57 compute-0 systemd[1]: Started libpod-conmon-ea88dfe85058bb6e9825793df059e3b108bb0ab70bc9a017e03ac5adb49d97b1.scope.
Jan 27 09:19:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:19:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a9781274bd38dc2aa7a539ab35a0de5721576429df4fc5750619c7097acc77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a9781274bd38dc2aa7a539ab35a0de5721576429df4fc5750619c7097acc77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a9781274bd38dc2aa7a539ab35a0de5721576429df4fc5750619c7097acc77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a9781274bd38dc2aa7a539ab35a0de5721576429df4fc5750619c7097acc77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:57 compute-0 ceph-mon[74357]: pgmap v1719: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:57 compute-0 podman[284766]: 2026-01-27 09:19:57.461471602 +0000 UTC m=+0.227399628 container init ea88dfe85058bb6e9825793df059e3b108bb0ab70bc9a017e03ac5adb49d97b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:19:57 compute-0 podman[284766]: 2026-01-27 09:19:57.46869184 +0000 UTC m=+0.234619866 container start ea88dfe85058bb6e9825793df059e3b108bb0ab70bc9a017e03ac5adb49d97b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:19:57 compute-0 podman[284766]: 2026-01-27 09:19:57.60308601 +0000 UTC m=+0.369014036 container attach ea88dfe85058bb6e9825793df059e3b108bb0ab70bc9a017e03ac5adb49d97b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:19:58 compute-0 eager_carson[284781]: {
Jan 27 09:19:58 compute-0 eager_carson[284781]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:19:58 compute-0 eager_carson[284781]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:19:58 compute-0 eager_carson[284781]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:19:58 compute-0 eager_carson[284781]:         "osd_id": 0,
Jan 27 09:19:58 compute-0 eager_carson[284781]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:19:58 compute-0 eager_carson[284781]:         "type": "bluestore"
Jan 27 09:19:58 compute-0 eager_carson[284781]:     }
Jan 27 09:19:58 compute-0 eager_carson[284781]: }
Jan 27 09:19:58 compute-0 systemd[1]: libpod-ea88dfe85058bb6e9825793df059e3b108bb0ab70bc9a017e03ac5adb49d97b1.scope: Deactivated successfully.
Jan 27 09:19:58 compute-0 podman[284766]: 2026-01-27 09:19:58.288382355 +0000 UTC m=+1.054310381 container died ea88dfe85058bb6e9825793df059e3b108bb0ab70bc9a017e03ac5adb49d97b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Jan 27 09:19:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8a9781274bd38dc2aa7a539ab35a0de5721576429df4fc5750619c7097acc77-merged.mount: Deactivated successfully.
Jan 27 09:19:58 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:59 compute-0 podman[284766]: 2026-01-27 09:19:59.104608805 +0000 UTC m=+1.870536851 container remove ea88dfe85058bb6e9825793df059e3b108bb0ab70bc9a017e03ac5adb49d97b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 09:19:59 compute-0 systemd[1]: libpod-conmon-ea88dfe85058bb6e9825793df059e3b108bb0ab70bc9a017e03ac5adb49d97b1.scope: Deactivated successfully.
Jan 27 09:19:59 compute-0 sudo[284660]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:19:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:19:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:19:59.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:19:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:19:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:19:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:19:59.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:19:59 compute-0 ceph-mon[74357]: pgmap v1720: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:19:59 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:19:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:19:59 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:19:59 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev ed0fbe01-014e-4b6c-bcf6-ec59dc2187e5 does not exist
Jan 27 09:19:59 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f0752497-13aa-494c-b040-7ae942209ee9 does not exist
Jan 27 09:19:59 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 163711cd-7b05-418c-ab82-cfd9ad77f941 does not exist
Jan 27 09:19:59 compute-0 sudo[284817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:19:59 compute-0 sudo[284817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:59 compute-0 sudo[284817]: pam_unix(sudo:session): session closed for user root
Jan 27 09:19:59 compute-0 sudo[284842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:19:59 compute-0 sudo[284842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:19:59 compute-0 sudo[284842]: pam_unix(sudo:session): session closed for user root
Jan 27 09:20:00 compute-0 ceph-mon[74357]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 27 09:20:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2639145733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:20:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/2639145733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:20:00 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:20:00 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:20:00 compute-0 ceph-mon[74357]: overall HEALTH_OK
Jan 27 09:20:00 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:01.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:01.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:01 compute-0 ceph-mon[74357]: pgmap v1721: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:02 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:03.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:20:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:03.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:20:03 compute-0 ceph-mon[74357]: pgmap v1722: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:04 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:05.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:05.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:06 compute-0 ceph-mon[74357]: pgmap v1723: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:06 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:07.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:07.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:07 compute-0 ceph-mon[74357]: pgmap v1724: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:08 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:09.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:20:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:09.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:20:09 compute-0 ceph-mon[74357]: pgmap v1725: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:10 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:11.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:11 compute-0 nova_compute[247671]: 2026-01-27 09:20:11.346 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:20:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:11.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:12 compute-0 podman[284873]: 2026-01-27 09:20:12.270232327 +0000 UTC m=+0.079208211 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:20:12 compute-0 ceph-mon[74357]: pgmap v1726: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:12 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:20:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:13.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:20:13 compute-0 ceph-mon[74357]: pgmap v1727: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:20:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:13.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:20:14 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:20:15
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.log', 'images']
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:20:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:15.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:15.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:20:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:20:15 compute-0 sudo[284901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:20:15 compute-0 sudo[284901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:20:15 compute-0 sudo[284901]: pam_unix(sudo:session): session closed for user root
Jan 27 09:20:15 compute-0 sudo[284926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:20:15 compute-0 sudo[284926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:20:15 compute-0 sudo[284926]: pam_unix(sudo:session): session closed for user root
Jan 27 09:20:16 compute-0 ceph-mon[74357]: pgmap v1728: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:16 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:17.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:17.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:17 compute-0 ceph-mon[74357]: pgmap v1729: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:17 compute-0 nova_compute[247671]: 2026-01-27 09:20:17.460 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:20:17 compute-0 nova_compute[247671]: 2026-01-27 09:20:17.460 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:20:18 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:19.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:19.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:20 compute-0 ceph-mon[74357]: pgmap v1730: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:20 compute-0 nova_compute[247671]: 2026-01-27 09:20:20.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:20:20 compute-0 nova_compute[247671]: 2026-01-27 09:20:20.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:20:20 compute-0 nova_compute[247671]: 2026-01-27 09:20:20.421 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:20:20 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:21.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:21.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:22 compute-0 ceph-mon[74357]: pgmap v1731: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:22 compute-0 nova_compute[247671]: 2026-01-27 09:20:22.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:20:22 compute-0 nova_compute[247671]: 2026-01-27 09:20:22.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:20:22 compute-0 nova_compute[247671]: 2026-01-27 09:20:22.422 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:20:22 compute-0 nova_compute[247671]: 2026-01-27 09:20:22.436 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:20:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:23.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:23.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:24 compute-0 ceph-mon[74357]: pgmap v1732: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:24 compute-0 nova_compute[247671]: 2026-01-27 09:20:24.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:20:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:20:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:25 compute-0 ceph-mon[74357]: pgmap v1733: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:25 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1826581556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:20:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:25.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:25.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:26 compute-0 podman[284957]: 2026-01-27 09:20:26.244686629 +0000 UTC m=+0.058575875 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:20:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1862186993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:20:26 compute-0 nova_compute[247671]: 2026-01-27 09:20:26.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:20:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:26 compute-0 nova_compute[247671]: 2026-01-27 09:20:26.513 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:20:26 compute-0 nova_compute[247671]: 2026-01-27 09:20:26.513 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:20:26 compute-0 nova_compute[247671]: 2026-01-27 09:20:26.514 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:20:26 compute-0 nova_compute[247671]: 2026-01-27 09:20:26.514 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:20:26 compute-0 nova_compute[247671]: 2026-01-27 09:20:26.514 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:20:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:20:26 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/100540566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:20:26 compute-0 nova_compute[247671]: 2026-01-27 09:20:26.933 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:20:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.074 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.075 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5121MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.075 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.075 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:20:27 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/100540566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:20:27 compute-0 ceph-mon[74357]: pgmap v1734: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:27.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:27.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.664 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.665 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.665 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.716 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing inventories for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.780 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Updating ProviderTree inventory for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.780 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Updating inventory in ProviderTree for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.793 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing aggregate associations for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.818 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Refreshing trait associations for resource provider 083cbb1c-f2d4-4883-a91d-8697c4453517, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 09:20:27 compute-0 nova_compute[247671]: 2026-01-27 09:20:27.880 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:20:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:20:28 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1294808735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:20:28 compute-0 nova_compute[247671]: 2026-01-27 09:20:28.283 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:20:28 compute-0 nova_compute[247671]: 2026-01-27 09:20:28.288 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:20:28 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1294808735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:20:28 compute-0 nova_compute[247671]: 2026-01-27 09:20:28.430 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:20:28 compute-0 nova_compute[247671]: 2026-01-27 09:20:28.433 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:20:28 compute-0 nova_compute[247671]: 2026-01-27 09:20:28.433 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:20:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:20:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:29.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:20:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/767513792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:20:29 compute-0 ceph-mon[74357]: pgmap v1735: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:20:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:29.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:20:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3366313307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:20:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:20:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:31.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:20:31 compute-0 ceph-mon[74357]: pgmap v1736: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:31.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:31 compute-0 nova_compute[247671]: 2026-01-27 09:20:31.434 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:20:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:33.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:20:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:33.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:20:33 compute-0 nova_compute[247671]: 2026-01-27 09:20:33.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:20:34 compute-0 ceph-mon[74357]: pgmap v1737: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.109121) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505635109171, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 891, "num_deletes": 251, "total_data_size": 1352679, "memory_usage": 1382048, "flush_reason": "Manual Compaction"}
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505635121683, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1327058, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37646, "largest_seqno": 38536, "table_properties": {"data_size": 1322650, "index_size": 2060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9842, "raw_average_key_size": 19, "raw_value_size": 1313751, "raw_average_value_size": 2627, "num_data_blocks": 91, "num_entries": 500, "num_filter_entries": 500, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769505561, "oldest_key_time": 1769505561, "file_creation_time": 1769505635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 12611 microseconds, and 3852 cpu microseconds.
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.121734) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1327058 bytes OK
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.121754) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.123685) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.123698) EVENT_LOG_v1 {"time_micros": 1769505635123694, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.123714) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1348423, prev total WAL file size 1348423, number of live WAL files 2.
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.124236) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1295KB)], [83(9119KB)]
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505635124287, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10665071, "oldest_snapshot_seqno": -1}
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6010 keys, 8673638 bytes, temperature: kUnknown
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505635185175, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8673638, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8635536, "index_size": 21992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 155221, "raw_average_key_size": 25, "raw_value_size": 8529076, "raw_average_value_size": 1419, "num_data_blocks": 882, "num_entries": 6010, "num_filter_entries": 6010, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769502442, "oldest_key_time": 0, "file_creation_time": 1769505635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6831b649-debc-4b07-a687-adb2cf43b3c1", "db_session_id": "9F6VHEUNOOK9VA53XR25", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.185395) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8673638 bytes
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.186812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.9 rd, 142.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.9 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(14.6) write-amplify(6.5) OK, records in: 6525, records dropped: 515 output_compression: NoCompression
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.186832) EVENT_LOG_v1 {"time_micros": 1769505635186822, "job": 48, "event": "compaction_finished", "compaction_time_micros": 60962, "compaction_time_cpu_micros": 18826, "output_level": 6, "num_output_files": 1, "total_output_size": 8673638, "num_input_records": 6525, "num_output_records": 6010, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505635187165, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769505635188751, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.124131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.188836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.188842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.188843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.188845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:20:35 compute-0 ceph-mon[74357]: rocksdb: (Original Log Time 2026/01/27-09:20:35.188846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:20:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:35.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:35.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:35 compute-0 sudo[285026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:20:35 compute-0 sudo[285026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:20:35 compute-0 sudo[285026]: pam_unix(sudo:session): session closed for user root
Jan 27 09:20:35 compute-0 sudo[285051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:20:35 compute-0 sudo[285051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:20:35 compute-0 sudo[285051]: pam_unix(sudo:session): session closed for user root
Jan 27 09:20:36 compute-0 ceph-mon[74357]: pgmap v1738: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:37.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:37.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:38 compute-0 ceph-mon[74357]: pgmap v1739: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:39.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:39 compute-0 ceph-mon[74357]: pgmap v1740: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:39.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:41.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:20:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:41.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:20:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:42 compute-0 ceph-mon[74357]: pgmap v1741: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:43 compute-0 podman[285080]: 2026-01-27 09:20:43.263074377 +0000 UTC m=+0.080819964 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:20:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:43.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:43.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:44 compute-0 ceph-mon[74357]: pgmap v1742: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:20:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:20:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:20:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:20:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:20:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:20:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:45.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:20:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:45.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:20:46 compute-0 ceph-mon[74357]: pgmap v1743: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:47.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:47.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:48 compute-0 ceph-mon[74357]: pgmap v1744: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:49.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:49.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:50 compute-0 ceph-mon[74357]: pgmap v1745: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:51.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:20:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:51.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:20:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:51 compute-0 ceph-mon[74357]: pgmap v1746: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:53 compute-0 ceph-mon[74357]: pgmap v1747: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:53.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:53.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:20:54.259 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:20:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:20:54.260 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:20:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:20:54.260 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:20:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:55.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 27 09:20:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:55.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 27 09:20:55 compute-0 sudo[285111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:20:55 compute-0 sudo[285111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:20:55 compute-0 sudo[285111]: pam_unix(sudo:session): session closed for user root
Jan 27 09:20:55 compute-0 sudo[285136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:20:55 compute-0 sudo[285136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:20:55 compute-0 sudo[285136]: pam_unix(sudo:session): session closed for user root
Jan 27 09:20:56 compute-0 ceph-mon[74357]: pgmap v1748: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:57 compute-0 podman[285162]: 2026-01-27 09:20:57.244824263 +0000 UTC m=+0.053121276 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:20:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:57.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:57 compute-0 ceph-mon[74357]: pgmap v1749: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:57.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:20:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:20:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1716524386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:20:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:20:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1716524386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:20:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:20:59.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:20:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:20:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:20:59.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:20:59 compute-0 radosgw[92542]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 27 09:20:59 compute-0 radosgw[92542]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 27 09:21:00 compute-0 radosgw[92542]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 27 09:21:00 compute-0 sudo[285183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:00 compute-0 sudo[285183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:00 compute-0 sudo[285183]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:00 compute-0 ceph-mon[74357]: pgmap v1750: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1716524386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:21:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/1716524386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:21:00 compute-0 radosgw[92542]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 27 09:21:00 compute-0 sudo[285208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:21:00 compute-0 sudo[285208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:00 compute-0 sudo[285208]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:00 compute-0 sudo[285233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:00 compute-0 sudo[285233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:00 compute-0 sudo[285233]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:00 compute-0 sudo[285258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:21:00 compute-0 sudo[285258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:00 compute-0 radosgw[92542]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 27 09:21:00 compute-0 radosgw[92542]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 27 09:21:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 27 09:21:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 27 09:21:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:00 compute-0 sudo[285258]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 27 09:21:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 27 09:21:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 27 09:21:00 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 27 09:21:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:00 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Jan 27 09:21:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:01.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:21:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:01.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:21:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 27 09:21:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 27 09:21:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:21:01 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:21:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:21:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:21:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:21:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:01 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b2c442b3-bd0b-42ad-aa44-fd9b35d8e840 does not exist
Jan 27 09:21:01 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 099529c8-33f9-4935-85d6-e855f3ca4c47 does not exist
Jan 27 09:21:01 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f2fcb600-7788-43ab-aced-bd797c0c0b88 does not exist
Jan 27 09:21:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:21:01 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:21:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:21:01 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:21:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:21:01 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:21:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 27 09:21:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:01 compute-0 ceph-mon[74357]: pgmap v1751: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Jan 27 09:21:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 27 09:21:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:21:01 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:21:01 compute-0 sudo[285315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:01 compute-0 sudo[285315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:01 compute-0 sudo[285315]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:01 compute-0 sudo[285340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:21:01 compute-0 sudo[285340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:01 compute-0 sudo[285340]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:01 compute-0 sudo[285365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:01 compute-0 sudo[285365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:01 compute-0 sudo[285365]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:02 compute-0 sudo[285390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:21:02 compute-0 sudo[285390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:02 compute-0 podman[285453]: 2026-01-27 09:21:02.350080376 +0000 UTC m=+0.049278369 container create 6ae23808d5feb805b9796751d1b1d0bd5e6cd0e180f486268a3070faf789fa11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:21:02 compute-0 systemd[1]: Started libpod-conmon-6ae23808d5feb805b9796751d1b1d0bd5e6cd0e180f486268a3070faf789fa11.scope.
Jan 27 09:21:02 compute-0 podman[285453]: 2026-01-27 09:21:02.323008116 +0000 UTC m=+0.022206129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:21:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:21:02 compute-0 podman[285453]: 2026-01-27 09:21:02.457981431 +0000 UTC m=+0.157179444 container init 6ae23808d5feb805b9796751d1b1d0bd5e6cd0e180f486268a3070faf789fa11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:21:02 compute-0 podman[285453]: 2026-01-27 09:21:02.464478819 +0000 UTC m=+0.163676812 container start 6ae23808d5feb805b9796751d1b1d0bd5e6cd0e180f486268a3070faf789fa11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:21:02 compute-0 adoring_chebyshev[285470]: 167 167
Jan 27 09:21:02 compute-0 systemd[1]: libpod-6ae23808d5feb805b9796751d1b1d0bd5e6cd0e180f486268a3070faf789fa11.scope: Deactivated successfully.
Jan 27 09:21:02 compute-0 podman[285453]: 2026-01-27 09:21:02.474836323 +0000 UTC m=+0.174034316 container attach 6ae23808d5feb805b9796751d1b1d0bd5e6cd0e180f486268a3070faf789fa11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 27 09:21:02 compute-0 podman[285453]: 2026-01-27 09:21:02.475310856 +0000 UTC m=+0.174508849 container died 6ae23808d5feb805b9796751d1b1d0bd5e6cd0e180f486268a3070faf789fa11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6647cefb267d08b683c70f88ecea9eb592676c891113e0d897f7de238e782329-merged.mount: Deactivated successfully.
Jan 27 09:21:02 compute-0 podman[285453]: 2026-01-27 09:21:02.534819235 +0000 UTC m=+0.234017228 container remove 6ae23808d5feb805b9796751d1b1d0bd5e6cd0e180f486268a3070faf789fa11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 09:21:02 compute-0 systemd[1]: libpod-conmon-6ae23808d5feb805b9796751d1b1d0bd5e6cd0e180f486268a3070faf789fa11.scope: Deactivated successfully.
Jan 27 09:21:02 compute-0 podman[285495]: 2026-01-27 09:21:02.693565042 +0000 UTC m=+0.038619479 container create 791c55a56ab0d0f0511947b5f8612bfb7ede173fdb6d8892067eb27f90b74c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:21:02 compute-0 systemd[1]: Started libpod-conmon-791c55a56ab0d0f0511947b5f8612bfb7ede173fdb6d8892067eb27f90b74c65.scope.
Jan 27 09:21:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6415ea9f44ba9ea891a3e3ce9e554b4417ba442a2f01678247d32fcc707db9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6415ea9f44ba9ea891a3e3ce9e554b4417ba442a2f01678247d32fcc707db9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6415ea9f44ba9ea891a3e3ce9e554b4417ba442a2f01678247d32fcc707db9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6415ea9f44ba9ea891a3e3ce9e554b4417ba442a2f01678247d32fcc707db9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6415ea9f44ba9ea891a3e3ce9e554b4417ba442a2f01678247d32fcc707db9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:02 compute-0 podman[285495]: 2026-01-27 09:21:02.676219307 +0000 UTC m=+0.021273764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:21:02 compute-0 podman[285495]: 2026-01-27 09:21:02.773066889 +0000 UTC m=+0.118121356 container init 791c55a56ab0d0f0511947b5f8612bfb7ede173fdb6d8892067eb27f90b74c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 27 09:21:02 compute-0 podman[285495]: 2026-01-27 09:21:02.780155263 +0000 UTC m=+0.125209700 container start 791c55a56ab0d0f0511947b5f8612bfb7ede173fdb6d8892067eb27f90b74c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_edison, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 09:21:02 compute-0 podman[285495]: 2026-01-27 09:21:02.784895653 +0000 UTC m=+0.129950090 container attach 791c55a56ab0d0f0511947b5f8612bfb7ede173fdb6d8892067eb27f90b74c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:21:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:21:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:21:02 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:21:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Jan 27 09:21:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:03.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:03.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:03 compute-0 hopeful_edison[285511]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:21:03 compute-0 hopeful_edison[285511]: --> relative data size: 1.0
Jan 27 09:21:03 compute-0 hopeful_edison[285511]: --> All data devices are unavailable
Jan 27 09:21:03 compute-0 systemd[1]: libpod-791c55a56ab0d0f0511947b5f8612bfb7ede173fdb6d8892067eb27f90b74c65.scope: Deactivated successfully.
Jan 27 09:21:03 compute-0 podman[285495]: 2026-01-27 09:21:03.590575484 +0000 UTC m=+0.935629941 container died 791c55a56ab0d0f0511947b5f8612bfb7ede173fdb6d8892067eb27f90b74c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_edison, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 09:21:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-db6415ea9f44ba9ea891a3e3ce9e554b4417ba442a2f01678247d32fcc707db9-merged.mount: Deactivated successfully.
Jan 27 09:21:03 compute-0 podman[285495]: 2026-01-27 09:21:03.737908759 +0000 UTC m=+1.082963196 container remove 791c55a56ab0d0f0511947b5f8612bfb7ede173fdb6d8892067eb27f90b74c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_edison, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 09:21:03 compute-0 systemd[1]: libpod-conmon-791c55a56ab0d0f0511947b5f8612bfb7ede173fdb6d8892067eb27f90b74c65.scope: Deactivated successfully.
Jan 27 09:21:03 compute-0 sudo[285390]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:03 compute-0 sudo[285538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:03 compute-0 sudo[285538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:03 compute-0 sudo[285538]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:03 compute-0 sudo[285563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:21:03 compute-0 sudo[285563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:03 compute-0 sudo[285563]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:03 compute-0 sudo[285588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:03 compute-0 sudo[285588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:03 compute-0 sudo[285588]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:03 compute-0 ceph-mon[74357]: pgmap v1752: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Jan 27 09:21:03 compute-0 sudo[285613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:21:03 compute-0 sudo[285613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:04 compute-0 podman[285678]: 2026-01-27 09:21:04.298691624 +0000 UTC m=+0.040453208 container create b1991d50f02c1915c0f059a31f1fefbc70ec12eae2bc2ec729c7f97b972fd6fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_zhukovsky, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:21:04 compute-0 systemd[1]: Started libpod-conmon-b1991d50f02c1915c0f059a31f1fefbc70ec12eae2bc2ec729c7f97b972fd6fe.scope.
Jan 27 09:21:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:21:04 compute-0 podman[285678]: 2026-01-27 09:21:04.36207094 +0000 UTC m=+0.103832554 container init b1991d50f02c1915c0f059a31f1fefbc70ec12eae2bc2ec729c7f97b972fd6fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:21:04 compute-0 podman[285678]: 2026-01-27 09:21:04.368924697 +0000 UTC m=+0.110686281 container start b1991d50f02c1915c0f059a31f1fefbc70ec12eae2bc2ec729c7f97b972fd6fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 27 09:21:04 compute-0 podman[285678]: 2026-01-27 09:21:04.371732754 +0000 UTC m=+0.113494358 container attach b1991d50f02c1915c0f059a31f1fefbc70ec12eae2bc2ec729c7f97b972fd6fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:21:04 compute-0 dazzling_zhukovsky[285694]: 167 167
Jan 27 09:21:04 compute-0 systemd[1]: libpod-b1991d50f02c1915c0f059a31f1fefbc70ec12eae2bc2ec729c7f97b972fd6fe.scope: Deactivated successfully.
Jan 27 09:21:04 compute-0 podman[285678]: 2026-01-27 09:21:04.27776336 +0000 UTC m=+0.019524964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:21:04 compute-0 podman[285678]: 2026-01-27 09:21:04.373109172 +0000 UTC m=+0.114870756 container died b1991d50f02c1915c0f059a31f1fefbc70ec12eae2bc2ec729c7f97b972fd6fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_zhukovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0d2e5e538c06a9b09aec530aaf36c85fd1b2764e91532c4981da7718bc6d6e3-merged.mount: Deactivated successfully.
Jan 27 09:21:04 compute-0 podman[285678]: 2026-01-27 09:21:04.429803824 +0000 UTC m=+0.171565408 container remove b1991d50f02c1915c0f059a31f1fefbc70ec12eae2bc2ec729c7f97b972fd6fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:21:04 compute-0 systemd[1]: libpod-conmon-b1991d50f02c1915c0f059a31f1fefbc70ec12eae2bc2ec729c7f97b972fd6fe.scope: Deactivated successfully.
Jan 27 09:21:04 compute-0 podman[285718]: 2026-01-27 09:21:04.586445034 +0000 UTC m=+0.044470129 container create 2707d3e55cadb32eeb6d8cd0de7afa1855a377fffa51a89200f30b9c403c2ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:21:04 compute-0 systemd[1]: Started libpod-conmon-2707d3e55cadb32eeb6d8cd0de7afa1855a377fffa51a89200f30b9c403c2ce8.scope.
Jan 27 09:21:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab1bc1a49324b4313b7fb03e9b785e8f1439dbbf3b42f1f00b1470baf2eee3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab1bc1a49324b4313b7fb03e9b785e8f1439dbbf3b42f1f00b1470baf2eee3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab1bc1a49324b4313b7fb03e9b785e8f1439dbbf3b42f1f00b1470baf2eee3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab1bc1a49324b4313b7fb03e9b785e8f1439dbbf3b42f1f00b1470baf2eee3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:04 compute-0 podman[285718]: 2026-01-27 09:21:04.565272043 +0000 UTC m=+0.023297158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:21:04 compute-0 podman[285718]: 2026-01-27 09:21:04.667438811 +0000 UTC m=+0.125463926 container init 2707d3e55cadb32eeb6d8cd0de7afa1855a377fffa51a89200f30b9c403c2ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:21:04 compute-0 podman[285718]: 2026-01-27 09:21:04.674775812 +0000 UTC m=+0.132800907 container start 2707d3e55cadb32eeb6d8cd0de7afa1855a377fffa51a89200f30b9c403c2ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 09:21:04 compute-0 podman[285718]: 2026-01-27 09:21:04.677223709 +0000 UTC m=+0.135248804 container attach 2707d3e55cadb32eeb6d8cd0de7afa1855a377fffa51a89200f30b9c403c2ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:21:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Jan 27 09:21:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:05.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:05 compute-0 ceph-mon[74357]: pgmap v1753: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Jan 27 09:21:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:05.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
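
The anonymous "HEAD / HTTP/1.0" requests that beast logs from 192.168.122.100 and 192.168.122.102 recur on a fixed ~2-second cadence for the rest of this window, which looks like load-balancer liveness probing rather than user traffic (an inference from the pattern; the log does not identify the callers). A probe of the same shape, as a minimal sketch with Python's standard library (host and port are assumptions — the listening endpoint is not shown in this log):

    import http.client

    # Hypothetical RGW endpoint; host/port are NOT taken from the log above.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=2)
    conn.request("HEAD", "/")          # same request the probes issue
    print(conn.getresponse().status)   # the probes above all receive 200
    conn.close()
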
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]: {
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:     "0": [
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:         {
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             "devices": [
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "/dev/loop3"
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             ],
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             "lv_name": "ceph_lv0",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             "lv_size": "7511998464",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             "name": "ceph_lv0",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             "tags": {
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "ceph.cluster_name": "ceph",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "ceph.crush_device_class": "",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "ceph.encrypted": "0",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "ceph.osd_id": "0",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "ceph.type": "block",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:                 "ceph.vdo": "0"
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             },
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             "type": "block",
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:             "vg_name": "ceph_vg0"
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:         }
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]:     ]
Jan 27 09:21:05 compute-0 stupefied_lichterman[285735]: }
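
The JSON block above is what the `ceph-volume lvm list --format json` call dispatched at 09:21:03 returns through the cephadm shim: a map from OSD id to its logical volumes, with the LVM tags present both as the raw `lv_tags` string and pre-parsed under `tags`. A minimal parsing sketch (field names are taken from the output above; the input path is a hypothetical capture of that output):

    import json

    # Hypothetical file holding the 'lvm list --format json' output above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in lvm.items():
        for lv in lvs:
            tags = lv["tags"]
            # e.g. 0 /dev/ceph_vg0/ceph_lv0 c06a7c81-ab3c-42b8-812f-79473670be30 block
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"], tags["ceph.type"])
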
Jan 27 09:21:05 compute-0 systemd[1]: libpod-2707d3e55cadb32eeb6d8cd0de7afa1855a377fffa51a89200f30b9c403c2ce8.scope: Deactivated successfully.
Jan 27 09:21:05 compute-0 podman[285718]: 2026-01-27 09:21:05.490436326 +0000 UTC m=+0.948461421 container died 2707d3e55cadb32eeb6d8cd0de7afa1855a377fffa51a89200f30b9c403c2ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lichterman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:21:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ab1bc1a49324b4313b7fb03e9b785e8f1439dbbf3b42f1f00b1470baf2eee3e-merged.mount: Deactivated successfully.
Jan 27 09:21:05 compute-0 podman[285718]: 2026-01-27 09:21:05.584529743 +0000 UTC m=+1.042554838 container remove 2707d3e55cadb32eeb6d8cd0de7afa1855a377fffa51a89200f30b9c403c2ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lichterman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:21:05 compute-0 systemd[1]: libpod-conmon-2707d3e55cadb32eeb6d8cd0de7afa1855a377fffa51a89200f30b9c403c2ce8.scope: Deactivated successfully.
Jan 27 09:21:05 compute-0 sudo[285613]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:05 compute-0 sudo[285758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:05 compute-0 sudo[285758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:05 compute-0 sudo[285758]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:05 compute-0 sudo[285783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:21:05 compute-0 sudo[285783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:05 compute-0 sudo[285783]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:05 compute-0 sudo[285808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:05 compute-0 sudo[285808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:05 compute-0 sudo[285808]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:05 compute-0 sudo[285833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:21:05 compute-0 sudo[285833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:06 compute-0 podman[285899]: 2026-01-27 09:21:06.168431861 +0000 UTC m=+0.043298116 container create 35f1360e470b7e86521c261252deb587f24aabf5a6a7dbcedccb8adc3594cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bhaskara, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 09:21:06 compute-0 systemd[1]: Started libpod-conmon-35f1360e470b7e86521c261252deb587f24aabf5a6a7dbcedccb8adc3594cba7.scope.
Jan 27 09:21:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:21:06 compute-0 podman[285899]: 2026-01-27 09:21:06.146613384 +0000 UTC m=+0.021479659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:21:06 compute-0 podman[285899]: 2026-01-27 09:21:06.254290382 +0000 UTC m=+0.129156657 container init 35f1360e470b7e86521c261252deb587f24aabf5a6a7dbcedccb8adc3594cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bhaskara, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 27 09:21:06 compute-0 podman[285899]: 2026-01-27 09:21:06.262864057 +0000 UTC m=+0.137730312 container start 35f1360e470b7e86521c261252deb587f24aabf5a6a7dbcedccb8adc3594cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bhaskara, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 27 09:21:06 compute-0 podman[285899]: 2026-01-27 09:21:06.265669324 +0000 UTC m=+0.140535599 container attach 35f1360e470b7e86521c261252deb587f24aabf5a6a7dbcedccb8adc3594cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:21:06 compute-0 systemd[1]: libpod-35f1360e470b7e86521c261252deb587f24aabf5a6a7dbcedccb8adc3594cba7.scope: Deactivated successfully.
Jan 27 09:21:06 compute-0 dreamy_bhaskara[285915]: 167 167
Jan 27 09:21:06 compute-0 podman[285899]: 2026-01-27 09:21:06.268274795 +0000 UTC m=+0.143141070 container died 35f1360e470b7e86521c261252deb587f24aabf5a6a7dbcedccb8adc3594cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bhaskara, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 27 09:21:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf960d33a6575dc1eaf5297dfdac2a8fd05188817391710f8bd16bd59c1aa4ea-merged.mount: Deactivated successfully.
Jan 27 09:21:06 compute-0 podman[285899]: 2026-01-27 09:21:06.307609282 +0000 UTC m=+0.182475537 container remove 35f1360e470b7e86521c261252deb587f24aabf5a6a7dbcedccb8adc3594cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bhaskara, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:21:06 compute-0 systemd[1]: libpod-conmon-35f1360e470b7e86521c261252deb587f24aabf5a6a7dbcedccb8adc3594cba7.scope: Deactivated successfully.
Jan 27 09:21:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:06 compute-0 podman[285937]: 2026-01-27 09:21:06.477351241 +0000 UTC m=+0.053354892 container create 563ffb0810683cdca199c051a6b8283823d3678170aaa0026ab0e50d7eee3d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:21:06 compute-0 systemd[1]: Started libpod-conmon-563ffb0810683cdca199c051a6b8283823d3678170aaa0026ab0e50d7eee3d4a.scope.
Jan 27 09:21:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:21:06 compute-0 podman[285937]: 2026-01-27 09:21:06.450876615 +0000 UTC m=+0.026880286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d64cfff44b10ebc09b32e4aa27dffa823d6ea26042b70e93880af3bacaa93a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d64cfff44b10ebc09b32e4aa27dffa823d6ea26042b70e93880af3bacaa93a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d64cfff44b10ebc09b32e4aa27dffa823d6ea26042b70e93880af3bacaa93a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d64cfff44b10ebc09b32e4aa27dffa823d6ea26042b70e93880af3bacaa93a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:06 compute-0 podman[285937]: 2026-01-27 09:21:06.559602553 +0000 UTC m=+0.135606204 container init 563ffb0810683cdca199c051a6b8283823d3678170aaa0026ab0e50d7eee3d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:21:06 compute-0 podman[285937]: 2026-01-27 09:21:06.56715193 +0000 UTC m=+0.143155581 container start 563ffb0810683cdca199c051a6b8283823d3678170aaa0026ab0e50d7eee3d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:21:06 compute-0 podman[285937]: 2026-01-27 09:21:06.570736577 +0000 UTC m=+0.146740228 container attach 563ffb0810683cdca199c051a6b8283823d3678170aaa0026ab0e50d7eee3d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:21:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 27 09:21:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:07.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:21:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:07.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:21:07 compute-0 elated_volhard[285953]: {
Jan 27 09:21:07 compute-0 elated_volhard[285953]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:21:07 compute-0 elated_volhard[285953]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:21:07 compute-0 elated_volhard[285953]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:21:07 compute-0 elated_volhard[285953]:         "osd_id": 0,
Jan 27 09:21:07 compute-0 elated_volhard[285953]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:21:07 compute-0 elated_volhard[285953]:         "type": "bluestore"
Jan 27 09:21:07 compute-0 elated_volhard[285953]:     }
Jan 27 09:21:07 compute-0 elated_volhard[285953]: }
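
`ceph-volume raw list --format json` (dispatched at 09:21:05) reports the same OSD again, but keyed by `osd_uuid` instead of OSD id and with the device named through its device-mapper path. A sketch that cross-checks the two listings, assuming both JSON documents above were captured to files (hypothetical paths):

    import json

    with open("raw_list.json") as f:
        raw = json.load(f)   # keyed by osd_uuid
    with open("lvm_list.json") as f:
        lvm = json.load(f)   # keyed by osd_id

    for osd_uuid, info in raw.items():
        # The lvm listing's ceph.osd_fsid tag should match the raw key.
        lvs = lvm[str(info["osd_id"])]
        assert any(lv["tags"]["ceph.osd_fsid"] == osd_uuid for lv in lvs)
        # e.g. 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore
        print(info["osd_id"], info["device"], info["type"])
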
Jan 27 09:21:07 compute-0 systemd[1]: libpod-563ffb0810683cdca199c051a6b8283823d3678170aaa0026ab0e50d7eee3d4a.scope: Deactivated successfully.
Jan 27 09:21:07 compute-0 podman[285937]: 2026-01-27 09:21:07.4948012 +0000 UTC m=+1.070804851 container died 563ffb0810683cdca199c051a6b8283823d3678170aaa0026ab0e50d7eee3d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 27 09:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-68d64cfff44b10ebc09b32e4aa27dffa823d6ea26042b70e93880af3bacaa93a-merged.mount: Deactivated successfully.
Jan 27 09:21:07 compute-0 podman[285937]: 2026-01-27 09:21:07.789304004 +0000 UTC m=+1.365307655 container remove 563ffb0810683cdca199c051a6b8283823d3678170aaa0026ab0e50d7eee3d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 27 09:21:07 compute-0 sudo[285833]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:21:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:07 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:21:07 compute-0 systemd[1]: libpod-conmon-563ffb0810683cdca199c051a6b8283823d3678170aaa0026ab0e50d7eee3d4a.scope: Deactivated successfully.
Jan 27 09:21:07 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:07 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev b8f39104-ef40-44fd-ac3d-a0583264e18a does not exist
Jan 27 09:21:07 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 5fac60b4-0b9a-4cce-a5dd-5613fb9a9969 does not exist
Jan 27 09:21:07 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 7ee1237d-3fd3-4265-8939-f112d402ec69 does not exist
Jan 27 09:21:08 compute-0 sudo[285987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:08 compute-0 sudo[285987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:08 compute-0 sudo[285987]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:08 compute-0 sudo[286012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:21:08 compute-0 sudo[286012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:08 compute-0 sudo[286012]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:08 compute-0 ceph-mon[74357]: pgmap v1754: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 27 09:21:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:08 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:21:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 27 09:21:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:09.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:09.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:10 compute-0 ceph-mon[74357]: pgmap v1755: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 27 09:21:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 115 op/s
Jan 27 09:21:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:11.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:11 compute-0 ceph-mon[74357]: pgmap v1756: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 115 op/s
Jan 27 09:21:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:11.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 62 KiB/s rd, 0 B/s wr, 102 op/s
Jan 27 09:21:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:13.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:13.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:14 compute-0 ceph-mon[74357]: pgmap v1757: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 62 KiB/s rd, 0 B/s wr, 102 op/s
Jan 27 09:21:14 compute-0 podman[286040]: 2026-01-27 09:21:14.268833508 +0000 UTC m=+0.083175589 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
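
The health_status entry above embeds the container's whole kolla/edpm configuration as a Python-literal dict after `config_data=`; the healthcheck stanza in it is what produced `health_status=healthy`. Because the dict is a Python literal (note `True` and single quotes), it can be recovered from the journal line by brace matching plus `ast.literal_eval` — a sketch, shown here against a trimmed copy of the line:

    import ast

    # Trimmed copy of the journal entry above; the real line carries the
    # full dict.
    line = ("... container health_status ... config_data={'healthcheck': "
            "{'mount': '/var/lib/openstack/healthchecks/ovn_controller', "
            "'test': '/openstack/healthcheck'}, 'privileged': True} ...")

    def config_data(journal_line):
        # Walk from 'config_data=' to the matching closing brace.
        start = journal_line.index("config_data=") + len("config_data=")
        depth, i = 0, start
        while True:
            if journal_line[i] == "{":
                depth += 1
            elif journal_line[i] == "}":
                depth -= 1
                if depth == 0:
                    break
            i += 1
        return ast.literal_eval(journal_line[start:i + 1])

    print(config_data(line)["healthcheck"]["test"])  # /openstack/healthcheck
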
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 124 op/s
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:21:15
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:21:15 compute-0 ceph-mon[74357]: pgmap v1758: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 124 op/s
Jan 27 09:21:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:15.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:21:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:21:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:15.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:15 compute-0 sudo[286067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:15 compute-0 sudo[286067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:15 compute-0 sudo[286067]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:15 compute-0 sudo[286092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:15 compute-0 sudo[286092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:15 compute-0 sudo[286092]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 133 op/s
Jan 27 09:21:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:21:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:17.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:21:17 compute-0 nova_compute[247671]: 2026-01-27 09:21:17.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:21:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:17.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:18 compute-0 ceph-mon[74357]: pgmap v1759: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 133 op/s
Jan 27 09:21:18 compute-0 nova_compute[247671]: 2026-01-27 09:21:18.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:21:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 96 op/s
Jan 27 09:21:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:19.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:19.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:19 compute-0 ceph-mon[74357]: pgmap v1760: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 96 op/s
Jan 27 09:21:20 compute-0 nova_compute[247671]: 2026-01-27 09:21:20.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:21:20 compute-0 nova_compute[247671]: 2026-01-27 09:21:20.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
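
The pair of nova-compute lines above shows the periodic task firing and then bailing out because `reclaim_instance_interval` is not set to a positive value, so soft-deleted instances are never reaped on this node. The guard reduces to a sketch like this (illustrative only, not nova's actual code; the message text is copied from the log):

    # CONF.reclaim_instance_interval <= 0 on this node, per the log above.
    reclaim_instance_interval = 0

    def _reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...otherwise nova would purge instances soft-deleted longer ago
        # than the interval.

    _reclaim_queued_deletes()
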
Jan 27 09:21:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 97 op/s
Jan 27 09:21:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:21.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:21 compute-0 nova_compute[247671]: 2026-01-27 09:21:21.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:21:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:21.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:22 compute-0 ceph-mon[74357]: pgmap v1761: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 97 op/s
Jan 27 09:21:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 54 op/s
Jan 27 09:21:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:23.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:23.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:23 compute-0 ceph-mon[74357]: pgmap v1762: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 54 op/s
Jan 27 09:21:24 compute-0 nova_compute[247671]: 2026-01-27 09:21:24.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:21:24 compute-0 nova_compute[247671]: 2026-01-27 09:21:24.499 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:21:24 compute-0 nova_compute[247671]: 2026-01-27 09:21:24.499 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:21:24 compute-0 nova_compute[247671]: 2026-01-27 09:21:24.500 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:21:24 compute-0 nova_compute[247671]: 2026-01-27 09:21:24.533 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:21:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:21:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 54 op/s
Jan 27 09:21:25 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3862685883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:21:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:21:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:25.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:21:25 compute-0 nova_compute[247671]: 2026-01-27 09:21:25.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:21:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:25.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:26 compute-0 ceph-mon[74357]: pgmap v1763: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 54 op/s
Jan 27 09:21:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/932516594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:21:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 27 09:21:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:27.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:27.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:28 compute-0 ceph-mon[74357]: pgmap v1764: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 27 09:21:28 compute-0 podman[286123]: 2026-01-27 09:21:28.24702392 +0000 UTC m=+0.060292932 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:21:28 compute-0 nova_compute[247671]: 2026-01-27 09:21:28.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:21:28 compute-0 nova_compute[247671]: 2026-01-27 09:21:28.455 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:21:28 compute-0 nova_compute[247671]: 2026-01-27 09:21:28.455 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:21:28 compute-0 nova_compute[247671]: 2026-01-27 09:21:28.455 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:21:28 compute-0 nova_compute[247671]: 2026-01-27 09:21:28.455 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:21:28 compute-0 nova_compute[247671]: 2026-01-27 09:21:28.456 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:21:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:21:28 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007679533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:21:28 compute-0 nova_compute[247671]: 2026-01-27 09:21:28.936 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:21:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.151 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.153 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.153 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.153 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:21:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3591708451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:21:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3007679533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.279 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.280 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.280 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.321 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:21:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:29.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:29.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:21:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016391503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.801 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.807 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.883 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.885 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:21:29 compute-0 nova_compute[247671]: 2026-01-27 09:21:29.885 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:21:30 compute-0 ceph-mon[74357]: pgmap v1765: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 27 09:21:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/672934254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:21:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1016391503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:21:30 compute-0 nova_compute[247671]: 2026-01-27 09:21:30.886 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:21:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 27 09:21:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:21:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:31.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:21:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:21:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:31.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:21:31 compute-0 ceph-mon[74357]: pgmap v1766: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 27 09:21:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:33.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:33.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:34 compute-0 ceph-mon[74357]: pgmap v1767: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:35.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:35 compute-0 nova_compute[247671]: 2026-01-27 09:21:35.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:21:35 compute-0 ceph-mon[74357]: pgmap v1768: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:35.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:35 compute-0 sudo[286191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:35 compute-0 sudo[286191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:35 compute-0 sudo[286191]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:35 compute-0 sudo[286216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:35 compute-0 sudo[286216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:35 compute-0 sudo[286216]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:37.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:37.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:38 compute-0 ceph-mon[74357]: pgmap v1769: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:39.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:39.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:39 compute-0 ceph-mon[74357]: pgmap v1770: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:41.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:41 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:41.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:41 compute-0 ceph-mon[74357]: pgmap v1771: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:21:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:43.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:21:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:21:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:43.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:21:44 compute-0 ceph-mon[74357]: pgmap v1772: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:21:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:21:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:21:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:21:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:21:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:21:45 compute-0 podman[286246]: 2026-01-27 09:21:45.257548997 +0000 UTC m=+0.076866666 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:21:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:45.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:45.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:46 compute-0 ceph-mon[74357]: pgmap v1773: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:47.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:21:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:47.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:21:48 compute-0 ceph-mon[74357]: pgmap v1774: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:49 compute-0 ceph-mon[74357]: pgmap v1775: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:49.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:49.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:51 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:51.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:51 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:51 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:51 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:51 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:51.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:52 compute-0 ceph-mon[74357]: pgmap v1776: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:53 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:53 compute-0 ceph-mon[74357]: pgmap v1777: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:53.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:53 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:53 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:53 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:53.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:21:54.260 159876 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:21:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:21:54.261 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:21:54 compute-0 ovn_metadata_agent[159871]: 2026-01-27 09:21:54.261 159876 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:21:55 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:21:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:55.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:21:55 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:55 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:55 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:55.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:56 compute-0 sudo[286277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:56 compute-0 sudo[286277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:56 compute-0 sudo[286277]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:56 compute-0 sudo[286302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:21:56 compute-0 sudo[286302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:21:56 compute-0 sudo[286302]: pam_unix(sudo:session): session closed for user root
Jan 27 09:21:56 compute-0 ceph-mon[74357]: pgmap v1778: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:56 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:57 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:57.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:57 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:57 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:57 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:57.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:58 compute-0 ceph-mon[74357]: pgmap v1779: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:59 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:59 compute-0 podman[286329]: 2026-01-27 09:21:59.226669702 +0000 UTC m=+0.045867496 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 09:21:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 27 09:21:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3401708900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:21:59 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 27 09:21:59 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3401708900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:21:59 compute-0 ceph-mon[74357]: pgmap v1780: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:21:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:21:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:21:59.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:21:59 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:21:59 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:21:59 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:21:59.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:22:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3401708900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 27 09:22:00 compute-0 ceph-mon[74357]: from='client.? 192.168.122.10:0/3401708900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 27 09:22:01 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:01.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:01 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:01 compute-0 ceph-mon[74357]: pgmap v1781: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:01 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:01 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:22:01 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:01.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:22:03 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:03.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:03 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:03 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:03 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:03.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:04 compute-0 ceph-mon[74357]: pgmap v1782: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:05 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:22:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:05.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:22:05 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:05 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:22:05 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:05.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:22:06 compute-0 ceph-mon[74357]: pgmap v1783: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:06 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:07 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:07.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:07 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:07 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:07 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:07.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:08 compute-0 ceph-mon[74357]: pgmap v1784: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:08 compute-0 sudo[286354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:08 compute-0 sudo[286354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:08 compute-0 sudo[286354]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:08 compute-0 sudo[286379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:22:08 compute-0 sudo[286379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:08 compute-0 sudo[286379]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:08 compute-0 sudo[286404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:08 compute-0 sudo[286404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:08 compute-0 sudo[286404]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:08 compute-0 sudo[286430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 27 09:22:08 compute-0 sudo[286430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:09 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:09 compute-0 sudo[286430]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 27 09:22:09 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 27 09:22:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:22:09 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:22:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 27 09:22:09 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:22:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 27 09:22:09 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config-key set", "key": "mgr/cephadm/osd_remove_queue"}]: dispatch
Jan 27 09:22:09 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 1cf5b326-cfa6-4fa7-b13f-766caabcbed2 does not exist
Jan 27 09:22:09 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c1eea88d-2b15-4461-91b5-9f44049ee299 does not exist
Jan 27 09:22:09 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev f11d08a0-cbb2-4bbf-9d9e-59e63c645b55 does not exist
Jan 27 09:22:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 27 09:22:09 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:22:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 27 09:22:09 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:22:09 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:22:09 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:22:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:09.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:09 compute-0 ceph-mon[74357]: pgmap v1785: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 27 09:22:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:22:09 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 27 09:22:09 compute-0 sudo[286486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:09 compute-0 sudo[286486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:09 compute-0 sudo[286486]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:09 compute-0 sudo[286511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:22:09 compute-0 sudo[286511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:09 compute-0 sudo[286511]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:09 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:09 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:09 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:09.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:09 compute-0 sudo[286536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:09 compute-0 sudo[286536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:09 compute-0 sudo[286536]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:09 compute-0 sudo[286561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 27 09:22:09 compute-0 sudo[286561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:09 compute-0 podman[286626]: 2026-01-27 09:22:09.920064688 +0000 UTC m=+0.052575130 container create 26219137cbd9bf076d07109148a85a71818c1d1875280d688adee20be6c1015c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_meitner, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:22:09 compute-0 systemd[1]: Started libpod-conmon-26219137cbd9bf076d07109148a85a71818c1d1875280d688adee20be6c1015c.scope.
Jan 27 09:22:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:22:09 compute-0 podman[286626]: 2026-01-27 09:22:09.889363018 +0000 UTC m=+0.021873480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:22:10 compute-0 podman[286626]: 2026-01-27 09:22:10.002020612 +0000 UTC m=+0.134531084 container init 26219137cbd9bf076d07109148a85a71818c1d1875280d688adee20be6c1015c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 27 09:22:10 compute-0 podman[286626]: 2026-01-27 09:22:10.008661983 +0000 UTC m=+0.141172415 container start 26219137cbd9bf076d07109148a85a71818c1d1875280d688adee20be6c1015c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_meitner, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:22:10 compute-0 blissful_meitner[286642]: 167 167
Jan 27 09:22:10 compute-0 systemd[1]: libpod-26219137cbd9bf076d07109148a85a71818c1d1875280d688adee20be6c1015c.scope: Deactivated successfully.
Jan 27 09:22:10 compute-0 podman[286626]: 2026-01-27 09:22:10.015920912 +0000 UTC m=+0.148431354 container attach 26219137cbd9bf076d07109148a85a71818c1d1875280d688adee20be6c1015c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:22:10 compute-0 podman[286626]: 2026-01-27 09:22:10.016493388 +0000 UTC m=+0.149003830 container died 26219137cbd9bf076d07109148a85a71818c1d1875280d688adee20be6c1015c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:22:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b93598d13712c8c4b8a6af4dfd8bdfc63d1fc3980679826d4cd84516465fe50c-merged.mount: Deactivated successfully.
Jan 27 09:22:10 compute-0 podman[286626]: 2026-01-27 09:22:10.057714517 +0000 UTC m=+0.190224959 container remove 26219137cbd9bf076d07109148a85a71818c1d1875280d688adee20be6c1015c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 27 09:22:10 compute-0 systemd[1]: libpod-conmon-26219137cbd9bf076d07109148a85a71818c1d1875280d688adee20be6c1015c.scope: Deactivated successfully.
Jan 27 09:22:10 compute-0 podman[286667]: 2026-01-27 09:22:10.251424771 +0000 UTC m=+0.082732117 container create d9d4e88d351bb8ab71fe66c46d6895a68210709a8beb4cdbc19f35b554419359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 27 09:22:10 compute-0 systemd[1]: Started libpod-conmon-d9d4e88d351bb8ab71fe66c46d6895a68210709a8beb4cdbc19f35b554419359.scope.
Jan 27 09:22:10 compute-0 podman[286667]: 2026-01-27 09:22:10.192685132 +0000 UTC m=+0.023992508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:22:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356a84bb58af682defd6f6caae25f72ad0d427beaf28de881248bb3b894ca655/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356a84bb58af682defd6f6caae25f72ad0d427beaf28de881248bb3b894ca655/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356a84bb58af682defd6f6caae25f72ad0d427beaf28de881248bb3b894ca655/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356a84bb58af682defd6f6caae25f72ad0d427beaf28de881248bb3b894ca655/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356a84bb58af682defd6f6caae25f72ad0d427beaf28de881248bb3b894ca655/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:10 compute-0 podman[286667]: 2026-01-27 09:22:10.467002784 +0000 UTC m=+0.298310150 container init d9d4e88d351bb8ab71fe66c46d6895a68210709a8beb4cdbc19f35b554419359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_clarke, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:22:10 compute-0 podman[286667]: 2026-01-27 09:22:10.473654406 +0000 UTC m=+0.304961742 container start d9d4e88d351bb8ab71fe66c46d6895a68210709a8beb4cdbc19f35b554419359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_clarke, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 27 09:22:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:22:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 27 09:22:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 27 09:22:10 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:22:10 compute-0 podman[286667]: 2026-01-27 09:22:10.533248678 +0000 UTC m=+0.364556054 container attach d9d4e88d351bb8ab71fe66c46d6895a68210709a8beb4cdbc19f35b554419359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 27 09:22:11 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:11 compute-0 great_clarke[286683]: --> passed data devices: 0 physical, 1 LVM
Jan 27 09:22:11 compute-0 great_clarke[286683]: --> relative data size: 1.0
Jan 27 09:22:11 compute-0 great_clarke[286683]: --> All data devices are unavailable
Jan 27 09:22:11 compute-0 systemd[1]: libpod-d9d4e88d351bb8ab71fe66c46d6895a68210709a8beb4cdbc19f35b554419359.scope: Deactivated successfully.
Jan 27 09:22:11 compute-0 podman[286667]: 2026-01-27 09:22:11.25971911 +0000 UTC m=+1.091026456 container died d9d4e88d351bb8ab71fe66c46d6895a68210709a8beb4cdbc19f35b554419359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 09:22:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-356a84bb58af682defd6f6caae25f72ad0d427beaf28de881248bb3b894ca655-merged.mount: Deactivated successfully.
Jan 27 09:22:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:11.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:11 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:11 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:11 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:11 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:11.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:11 compute-0 ceph-mon[74357]: pgmap v1786: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:11 compute-0 podman[286667]: 2026-01-27 09:22:11.710334969 +0000 UTC m=+1.541642315 container remove d9d4e88d351bb8ab71fe66c46d6895a68210709a8beb4cdbc19f35b554419359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 27 09:22:11 compute-0 systemd[1]: libpod-conmon-d9d4e88d351bb8ab71fe66c46d6895a68210709a8beb4cdbc19f35b554419359.scope: Deactivated successfully.
Jan 27 09:22:11 compute-0 sudo[286561]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:11 compute-0 sudo[286716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:11 compute-0 sudo[286716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:11 compute-0 sudo[286716]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:11 compute-0 sudo[286741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:22:11 compute-0 sudo[286741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:11 compute-0 sudo[286741]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:11 compute-0 sudo[286766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:11 compute-0 sudo[286766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:11 compute-0 sudo[286766]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:11 compute-0 sudo[286791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- lvm list --format json
Jan 27 09:22:11 compute-0 sudo[286791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:12 compute-0 podman[286857]: 2026-01-27 09:22:12.238199313 +0000 UTC m=+0.022913059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:22:12 compute-0 podman[286857]: 2026-01-27 09:22:12.417269216 +0000 UTC m=+0.201982982 container create c4287e5fd6a76bb33b8def93fb04af744a95e49845e0cf1183d553ea4465bf07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_solomon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:22:12 compute-0 systemd[1]: Started libpod-conmon-c4287e5fd6a76bb33b8def93fb04af744a95e49845e0cf1183d553ea4465bf07.scope.
Jan 27 09:22:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:22:12 compute-0 podman[286857]: 2026-01-27 09:22:12.624422188 +0000 UTC m=+0.409136144 container init c4287e5fd6a76bb33b8def93fb04af744a95e49845e0cf1183d553ea4465bf07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 27 09:22:12 compute-0 podman[286857]: 2026-01-27 09:22:12.630864254 +0000 UTC m=+0.415577980 container start c4287e5fd6a76bb33b8def93fb04af744a95e49845e0cf1183d553ea4465bf07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:22:12 compute-0 quirky_solomon[286873]: 167 167
Jan 27 09:22:12 compute-0 systemd[1]: libpod-c4287e5fd6a76bb33b8def93fb04af744a95e49845e0cf1183d553ea4465bf07.scope: Deactivated successfully.
Jan 27 09:22:12 compute-0 podman[286857]: 2026-01-27 09:22:12.680818492 +0000 UTC m=+0.465532238 container attach c4287e5fd6a76bb33b8def93fb04af744a95e49845e0cf1183d553ea4465bf07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 27 09:22:12 compute-0 podman[286857]: 2026-01-27 09:22:12.681624174 +0000 UTC m=+0.466337910 container died c4287e5fd6a76bb33b8def93fb04af744a95e49845e0cf1183d553ea4465bf07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_solomon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:22:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9297a30ba08c122e52aca46fd8f6c3f27d3e26263a14c2bf30d58c911844b2d6-merged.mount: Deactivated successfully.
Jan 27 09:22:12 compute-0 podman[286857]: 2026-01-27 09:22:12.846036227 +0000 UTC m=+0.630749953 container remove c4287e5fd6a76bb33b8def93fb04af744a95e49845e0cf1183d553ea4465bf07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_solomon, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:22:12 compute-0 systemd[1]: libpod-conmon-c4287e5fd6a76bb33b8def93fb04af744a95e49845e0cf1183d553ea4465bf07.scope: Deactivated successfully.
Jan 27 09:22:13 compute-0 podman[286899]: 2026-01-27 09:22:13.004039663 +0000 UTC m=+0.046555355 container create 6502bd7e61f6e62b24782e26fa28aa397988082f41a9ea36ebd9f4e69687b27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_snyder, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 09:22:13 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:13 compute-0 systemd[1]: Started libpod-conmon-6502bd7e61f6e62b24782e26fa28aa397988082f41a9ea36ebd9f4e69687b27a.scope.
Jan 27 09:22:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:22:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84465d3c58f741b3b97b661b51f4e668a2ebd49057630f74a73406f6dc2e8a2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84465d3c58f741b3b97b661b51f4e668a2ebd49057630f74a73406f6dc2e8a2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84465d3c58f741b3b97b661b51f4e668a2ebd49057630f74a73406f6dc2e8a2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84465d3c58f741b3b97b661b51f4e668a2ebd49057630f74a73406f6dc2e8a2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:13 compute-0 podman[286899]: 2026-01-27 09:22:12.983204673 +0000 UTC m=+0.025720385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:22:13 compute-0 podman[286899]: 2026-01-27 09:22:13.093471222 +0000 UTC m=+0.135986934 container init 6502bd7e61f6e62b24782e26fa28aa397988082f41a9ea36ebd9f4e69687b27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:22:13 compute-0 podman[286899]: 2026-01-27 09:22:13.100051562 +0000 UTC m=+0.142567254 container start 6502bd7e61f6e62b24782e26fa28aa397988082f41a9ea36ebd9f4e69687b27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_snyder, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 27 09:22:13 compute-0 podman[286899]: 2026-01-27 09:22:13.109670866 +0000 UTC m=+0.152186558 container attach 6502bd7e61f6e62b24782e26fa28aa397988082f41a9ea36ebd9f4e69687b27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_snyder, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:22:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:13.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:13 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:13 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:13 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:13.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]: {
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:     "0": [
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:         {
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             "devices": [
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "/dev/loop3"
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             ],
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             "lv_name": "ceph_lv0",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             "lv_size": "7511998464",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=281e9bde-2795-59f4-98ac-90cf5b49a2de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c06a7c81-ab3c-42b8-812f-79473670be30,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             "lv_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             "name": "ceph_lv0",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             "tags": {
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "ceph.block_uuid": "2z841N-Kzt4-6d3a-EOlU-XbYK-wf39-OgRqab",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "ceph.cluster_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "ceph.cluster_name": "ceph",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "ceph.crush_device_class": "",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "ceph.encrypted": "0",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "ceph.osd_fsid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "ceph.osd_id": "0",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "ceph.type": "block",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:                 "ceph.vdo": "0"
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             },
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             "type": "block",
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:             "vg_name": "ceph_vg0"
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:         }
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]:     ]
Jan 27 09:22:13 compute-0 heuristic_snyder[286916]: }
Jan 27 09:22:13 compute-0 systemd[1]: libpod-6502bd7e61f6e62b24782e26fa28aa397988082f41a9ea36ebd9f4e69687b27a.scope: Deactivated successfully.
Jan 27 09:22:13 compute-0 podman[286899]: 2026-01-27 09:22:13.874465187 +0000 UTC m=+0.916980879 container died 6502bd7e61f6e62b24782e26fa28aa397988082f41a9ea36ebd9f4e69687b27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_snyder, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 27 09:22:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-84465d3c58f741b3b97b661b51f4e668a2ebd49057630f74a73406f6dc2e8a2e-merged.mount: Deactivated successfully.
Jan 27 09:22:14 compute-0 podman[286899]: 2026-01-27 09:22:14.067321778 +0000 UTC m=+1.109837470 container remove 6502bd7e61f6e62b24782e26fa28aa397988082f41a9ea36ebd9f4e69687b27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:22:14 compute-0 systemd[1]: libpod-conmon-6502bd7e61f6e62b24782e26fa28aa397988082f41a9ea36ebd9f4e69687b27a.scope: Deactivated successfully.
Jan 27 09:22:14 compute-0 sudo[286791]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:14 compute-0 sudo[286938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:14 compute-0 ceph-mon[74357]: pgmap v1787: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:14 compute-0 sudo[286938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:14 compute-0 sudo[286938]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:14 compute-0 sudo[286963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 09:22:14 compute-0 sudo[286963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:14 compute-0 sudo[286963]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:14 compute-0 sudo[286988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:14 compute-0 sudo[286988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:14 compute-0 sudo[286988]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:14 compute-0 sudo[287013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/281e9bde-2795-59f4-98ac-90cf5b49a2de/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 281e9bde-2795-59f4-98ac-90cf5b49a2de -- raw list --format json
Jan 27 09:22:14 compute-0 sudo[287013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:14 compute-0 podman[287080]: 2026-01-27 09:22:14.654161276 +0000 UTC m=+0.050080491 container create 6430411cd9256627729e6d013e0eab35ab8482bd2c8a9b07b08afda02a261ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_goldstine, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:22:14 compute-0 systemd[1]: Started libpod-conmon-6430411cd9256627729e6d013e0eab35ab8482bd2c8a9b07b08afda02a261ff7.scope.
Jan 27 09:22:14 compute-0 podman[287080]: 2026-01-27 09:22:14.627011434 +0000 UTC m=+0.022930669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:22:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:22:14 compute-0 podman[287080]: 2026-01-27 09:22:14.752273513 +0000 UTC m=+0.148192748 container init 6430411cd9256627729e6d013e0eab35ab8482bd2c8a9b07b08afda02a261ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 27 09:22:14 compute-0 podman[287080]: 2026-01-27 09:22:14.763005777 +0000 UTC m=+0.158924992 container start 6430411cd9256627729e6d013e0eab35ab8482bd2c8a9b07b08afda02a261ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_goldstine, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 27 09:22:14 compute-0 brave_goldstine[287096]: 167 167
Jan 27 09:22:14 compute-0 systemd[1]: libpod-6430411cd9256627729e6d013e0eab35ab8482bd2c8a9b07b08afda02a261ff7.scope: Deactivated successfully.
Jan 27 09:22:14 compute-0 conmon[287096]: conmon 6430411cd9256627729e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6430411cd9256627729e6d013e0eab35ab8482bd2c8a9b07b08afda02a261ff7.scope/container/memory.events
Jan 27 09:22:14 compute-0 podman[287080]: 2026-01-27 09:22:14.773278699 +0000 UTC m=+0.169197914 container attach 6430411cd9256627729e6d013e0eab35ab8482bd2c8a9b07b08afda02a261ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:22:14 compute-0 podman[287080]: 2026-01-27 09:22:14.77513151 +0000 UTC m=+0.171050725 container died 6430411cd9256627729e6d013e0eab35ab8482bd2c8a9b07b08afda02a261ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:22:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6de00840f96fec214b0507d232035dfc88eeafc7f8ac957ed52f14fca409d4c6-merged.mount: Deactivated successfully.
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:22:15 compute-0 podman[287080]: 2026-01-27 09:22:15.123393466 +0000 UTC m=+0.519312681 container remove 6430411cd9256627729e6d013e0eab35ab8482bd2c8a9b07b08afda02a261ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Optimize plan auto_2026-01-27_09:22:15
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [balancer INFO root] do_upmap
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups']
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [balancer INFO root] prepared 0/10 changes
Jan 27 09:22:15 compute-0 systemd[1]: libpod-conmon-6430411cd9256627729e6d013e0eab35ab8482bd2c8a9b07b08afda02a261ff7.scope: Deactivated successfully.
Jan 27 09:22:15 compute-0 podman[287123]: 2026-01-27 09:22:15.282008619 +0000 UTC m=+0.042945017 container create a6116a1973e8e1f2d609bf6795c4cda2254a42260a93711942601f45b86fc642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 27 09:22:15 compute-0 systemd[1]: Started libpod-conmon-a6116a1973e8e1f2d609bf6795c4cda2254a42260a93711942601f45b86fc642.scope.
Jan 27 09:22:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 09:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450d16aec085cccb49c18ec9382a607bb229e7551074bd9496afc7455fe1af83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450d16aec085cccb49c18ec9382a607bb229e7551074bd9496afc7455fe1af83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450d16aec085cccb49c18ec9382a607bb229e7551074bd9496afc7455fe1af83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450d16aec085cccb49c18ec9382a607bb229e7551074bd9496afc7455fe1af83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:15 compute-0 ceph-mon[74357]: pgmap v1788: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:15 compute-0 podman[287123]: 2026-01-27 09:22:15.357923228 +0000 UTC m=+0.118859646 container init a6116a1973e8e1f2d609bf6795c4cda2254a42260a93711942601f45b86fc642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:22:15 compute-0 podman[287123]: 2026-01-27 09:22:15.264698975 +0000 UTC m=+0.025635393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 27 09:22:15 compute-0 podman[287123]: 2026-01-27 09:22:15.365378632 +0000 UTC m=+0.126315030 container start a6116a1973e8e1f2d609bf6795c4cda2254a42260a93711942601f45b86fc642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:22:15 compute-0 podman[287123]: 2026-01-27 09:22:15.369341861 +0000 UTC m=+0.130278259 container attach a6116a1973e8e1f2d609bf6795c4cda2254a42260a93711942601f45b86fc642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:22:15 compute-0 podman[287137]: 2026-01-27 09:22:15.413780428 +0000 UTC m=+0.092055422 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:22:15 compute-0 ceph-mgr[74650]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:22:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:15.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:15 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:15 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:15 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:15.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:15 compute-0 sshd-session[287170]: Accepted publickey for zuul from 192.168.122.10 port 59246 ssh2: ECDSA SHA256:f5Z0m2dkHn65zqcIWhGOpceeRGGTJBJfAENb5pouMns
Jan 27 09:22:15 compute-0 systemd-logind[799]: New session 52 of user zuul.
Jan 27 09:22:16 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 27 09:22:16 compute-0 sshd-session[287170]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 09:22:16 compute-0 sudo[287179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 27 09:22:16 compute-0 sudo[287179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 09:22:16 compute-0 determined_goldwasser[287140]: {
Jan 27 09:22:16 compute-0 determined_goldwasser[287140]:     "c06a7c81-ab3c-42b8-812f-79473670be30": {
Jan 27 09:22:16 compute-0 determined_goldwasser[287140]:         "ceph_fsid": "281e9bde-2795-59f4-98ac-90cf5b49a2de",
Jan 27 09:22:16 compute-0 determined_goldwasser[287140]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 27 09:22:16 compute-0 determined_goldwasser[287140]:         "osd_id": 0,
Jan 27 09:22:16 compute-0 determined_goldwasser[287140]:         "osd_uuid": "c06a7c81-ab3c-42b8-812f-79473670be30",
Jan 27 09:22:16 compute-0 determined_goldwasser[287140]:         "type": "bluestore"
Jan 27 09:22:16 compute-0 determined_goldwasser[287140]:     }
Jan 27 09:22:16 compute-0 determined_goldwasser[287140]: }
Jan 27 09:22:16 compute-0 systemd[1]: libpod-a6116a1973e8e1f2d609bf6795c4cda2254a42260a93711942601f45b86fc642.scope: Deactivated successfully.
Jan 27 09:22:16 compute-0 podman[287123]: 2026-01-27 09:22:16.19432878 +0000 UTC m=+0.955265178 container died a6116a1973e8e1f2d609bf6795c4cda2254a42260a93711942601f45b86fc642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 27 09:22:16 compute-0 sudo[287217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:16 compute-0 sudo[287217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:16 compute-0 sudo[287217]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:16 compute-0 sudo[287253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:16 compute-0 sudo[287253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:16 compute-0 sudo[287253]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-450d16aec085cccb49c18ec9382a607bb229e7551074bd9496afc7455fe1af83-merged.mount: Deactivated successfully.
Jan 27 09:22:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:16 compute-0 podman[287123]: 2026-01-27 09:22:16.64424537 +0000 UTC m=+1.405181768 container remove a6116a1973e8e1f2d609bf6795c4cda2254a42260a93711942601f45b86fc642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:22:16 compute-0 sudo[287013]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 27 09:22:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:22:16 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 27 09:22:16 compute-0 ceph-mon[74357]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:22:16 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev c8718be6-a3e9-423d-996b-26b22ccad9ac does not exist
Jan 27 09:22:16 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev e333a903-2972-4e08-add4-d37448308607 does not exist
Jan 27 09:22:16 compute-0 ceph-mgr[74650]: [progress WARNING root] complete: ev 02185403-bb7b-4bd3-85d4-6c96ba154d96 does not exist
Jan 27 09:22:17 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:17 compute-0 systemd[1]: libpod-conmon-a6116a1973e8e1f2d609bf6795c4cda2254a42260a93711942601f45b86fc642.scope: Deactivated successfully.
Jan 27 09:22:17 compute-0 sudo[287286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:17 compute-0 sudo[287286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:17 compute-0 sudo[287286]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:17 compute-0 sudo[287311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 09:22:17 compute-0 sudo[287311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:17 compute-0 sudo[287311]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:17 compute-0 nova_compute[247671]: 2026-01-27 09:22:17.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:22:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:22:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:17.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:22:17 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:17 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:17 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:17.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:22:17 compute-0 ceph-mon[74357]: from='mgr.14132 192.168.122.100:0/2133798630' entity='mgr.compute-0.vujqxq' 
Jan 27 09:22:17 compute-0 ceph-mon[74357]: pgmap v1789: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:18 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.17754 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:18 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27491 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:19 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:19 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27497 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:19 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.17763 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:19 compute-0 ceph-mon[74357]: from='client.17754 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:19 compute-0 ceph-mon[74357]: from='client.27491 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:19 compute-0 ceph-mon[74357]: pgmap v1790: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:19 compute-0 ceph-mon[74357]: from='client.27497 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:19.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:19 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:19 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:19 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:19.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:19 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 27 09:22:19 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1278966121' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 27 09:22:20 compute-0 ceph-mon[74357]: from='client.17763 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:20 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4218704138' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 27 09:22:20 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1278966121' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 27 09:22:20 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:20 compute-0 nova_compute[247671]: 2026-01-27 09:22:20.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:22:20 compute-0 nova_compute[247671]: 2026-01-27 09:22:20.424 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:22:20 compute-0 nova_compute[247671]: 2026-01-27 09:22:20.424 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:22:20 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27562 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:21 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:21 compute-0 ceph-mon[74357]: from='client.27556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:21 compute-0 ceph-mon[74357]: from='client.27562 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:21 compute-0 ceph-mon[74357]: pgmap v1791: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:21.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:21 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:21 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:21 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:21 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:21.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:22 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3593143662' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 27 09:22:23 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:23 compute-0 nova_compute[247671]: 2026-01-27 09:22:23.418 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:22:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:23.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:23 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:23 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:23 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:23.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:23 compute-0 ceph-mon[74357]: pgmap v1792: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:24 compute-0 ovs-vsctl[287628]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 27 09:22:24 compute-0 ceph-mgr[74650]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 27 09:22:25 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:25 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3297105949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:22:25 compute-0 virtqemud[248823]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 27 09:22:25 compute-0 virtqemud[248823]: hostname: compute-0
Jan 27 09:22:25 compute-0 virtqemud[248823]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 27 09:22:25 compute-0 virtqemud[248823]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 27 09:22:25 compute-0 nova_compute[247671]: 2026-01-27 09:22:25.423 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:22:25 compute-0 nova_compute[247671]: 2026-01-27 09:22:25.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:22:25 compute-0 nova_compute[247671]: 2026-01-27 09:22:25.423 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:22:25 compute-0 virtqemud[248823]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 27 09:22:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:25.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:25 compute-0 nova_compute[247671]: 2026-01-27 09:22:25.446 247675 DEBUG nova.compute.manager [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:22:25 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:25 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:25 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:25.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:26 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: cache status {prefix=cache status} (starting...)
Jan 27 09:22:26 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Can't run that command on an inactive MDS!
Jan 27 09:22:26 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27509 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:26 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: client ls {prefix=client ls} (starting...)
Jan 27 09:22:26 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Can't run that command on an inactive MDS!
Jan 27 09:22:26 compute-0 lvm[287992]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:22:26 compute-0 lvm[287992]: VG ceph_vg0 finished
Jan 27 09:22:26 compute-0 ceph-mon[74357]: pgmap v1793: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:26 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1236924223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:22:26 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27515 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:26 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 27 09:22:26 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 27 09:22:26 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.17781 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:26 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: damage ls {prefix=damage ls} (starting...)
Jan 27 09:22:26 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Can't run that command on an inactive MDS!
Jan 27 09:22:26 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: dump loads {prefix=dump loads} (starting...)
Jan 27 09:22:26 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Can't run that command on an inactive MDS!
Jan 27 09:22:27 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:27 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 27 09:22:27 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Can't run that command on an inactive MDS!
Jan 27 09:22:27 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.17787 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 27 09:22:27 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Can't run that command on an inactive MDS!
Jan 27 09:22:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 27 09:22:27 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437964284' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27542 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mgr[74650]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:22:27 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T09:22:27.346+0000 7fe0fd675640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:22:27 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 27 09:22:27 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Can't run that command on an inactive MDS!
Jan 27 09:22:27 compute-0 nova_compute[247671]: 2026-01-27 09:22:27.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:22:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:27.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:27 compute-0 ceph-mon[74357]: from='client.27509 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mon[74357]: from='client.27515 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2326547679' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mon[74357]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mon[74357]: from='client.17781 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2252098716' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mon[74357]: pgmap v1794: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:27 compute-0 ceph-mon[74357]: from='client.17787 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/437964284' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2211193706' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 27 09:22:27 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Can't run that command on an inactive MDS!
Jan 27 09:22:27 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:27 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:22:27 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:27.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:22:27 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 27 09:22:27 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1881590466' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 27 09:22:27 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Can't run that command on an inactive MDS!
Jan 27 09:22:27 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 27 09:22:27 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Can't run that command on an inactive MDS!
Jan 27 09:22:27 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.17811 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:27 compute-0 ceph-mgr[74650]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:22:27 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T09:22:27.972+0000 7fe0fd675640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:22:28 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: ops {prefix=ops} (starting...)
Jan 27 09:22:28 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Can't run that command on an inactive MDS!
Jan 27 09:22:28 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27589 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27578 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 27 09:22:28 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3848302133' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: from='client.27542 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1881590466' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2214970822' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3491703296' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: from='client.17811 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/677483387' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: from='client.27589 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2648408741' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3848302133' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 27 09:22:28 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344461232' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 27 09:22:28 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27601 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27593 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 27 09:22:28 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/482064287' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: session ls {prefix=session ls} (starting...)
Jan 27 09:22:28 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum Can't run that command on an inactive MDS!
Jan 27 09:22:28 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.17841 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:28 compute-0 ceph-mds[96364]: mds.cephfs.compute-0.ceuaum asok_command: status {prefix=status} (starting...)
Jan 27 09:22:29 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:29 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.17862 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27634 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mgr[74650]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:22:29 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T09:22:29.332+0000 7fe0fd675640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:22:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 27 09:22:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1520423646' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 27 09:22:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 27 09:22:29 compute-0 nova_compute[247671]: 2026-01-27 09:22:29.421 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:22:29 compute-0 nova_compute[247671]: 2026-01-27 09:22:29.422 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:22:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:22:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:29.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:22:29 compute-0 nova_compute[247671]: 2026-01-27 09:22:29.453 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:22:29 compute-0 nova_compute[247671]: 2026-01-27 09:22:29.453 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:29 compute-0 nova_compute[247671]: 2026-01-27 09:22:29.453 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:29 compute-0 nova_compute[247671]: 2026-01-27 09:22:29.454 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:22:29 compute-0 nova_compute[247671]: 2026-01-27 09:22:29.454 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.27578 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2344461232' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/4257583055' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.27601 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/99302042' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.27593 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/482064287' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.17841 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3552950163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1645061697' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: pgmap v1795: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/618770394' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/786299643' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1520423646' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2747810256' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 27 09:22:29 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:29 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:22:29 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:29.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:22:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 27 09:22:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2930244984' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 27 09:22:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3228522079' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 27 09:22:29 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:22:29 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3244401394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:22:29 compute-0 nova_compute[247671]: 2026-01-27 09:22:29.978 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:22:30 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27664 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:30 compute-0 nova_compute[247671]: 2026-01-27 09:22:30.144 247675 WARNING nova.virt.libvirt.driver [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:22:30 compute-0 nova_compute[247671]: 2026-01-27 09:22:30.146 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4891MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:22:30 compute-0 nova_compute[247671]: 2026-01-27 09:22:30.146 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:22:30 compute-0 nova_compute[247671]: 2026-01-27 09:22:30.146 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 27 09:22:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1628777137' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 09:22:30 compute-0 podman[288491]: 2026-01-27 09:22:30.246612462 +0000 UTC m=+0.058345388 container health_status 5a2935001f94dc377c75787947a55f8b89b1d25119246ac9e874c96c915c5089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 27 09:22:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 27 09:22:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422578471' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27650 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mgr[74650]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 09:22:30 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T09:22:30.314+0000 7fe0fd675640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 09:22:30 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27676 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 27 09:22:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4001622867' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.17907 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mgr[74650]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 09:22:30 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T09:22:30.699+0000 7fe0fd675640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.17862 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.27634 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3247432258' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3461318373' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/250399343' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2930244984' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3228522079' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/674817609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/947391014' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3244401394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1194610975' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.27664 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3821773086' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1628777137' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3422578471' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/605511411' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/615165253' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4001622867' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27677 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:30 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 27 09:22:30 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 27 09:22:31 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/837255923' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 27 09:22:31 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4142657194' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27689 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:31.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:31 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:31 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:31 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:31.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:31 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27698 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27704 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27742 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mgr[74650]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 09:22:31 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T09:22:31.764+0000 7fe0fd675640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 09:22:31 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 27 09:22:31 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2742516695' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.27650 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.27676 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.17907 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/453316866' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/139967988' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.27677 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3868827201' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: pgmap v1796: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1970834381' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/837255923' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4142657194' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1050181308' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3517073975' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3058225710' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3183300267' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 27 09:22:31 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2742516695' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.17952 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 27 09:22:32 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3708348845' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27757 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 27 09:22:32 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2780761255' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.17973 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27769 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:18.355862+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:19.356085+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:20.356281+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:21.356454+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:22.356642+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:23.356800+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:24.357066+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:25.357258+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:26.357473+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:27.357658+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:28.357842+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 595.897521973s of 600.386718750s, submitted: 244
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:29.358046+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 1499136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [1])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874838 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:30.358195+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 319488 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:31.358333+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1236992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:32.358464+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1236992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:33.358627+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1236992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:34.358792+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:35.358927+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:36.359110+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:37.359242+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:38.359623+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:39.359903+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:40.360076+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:41.360228+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:42.360372+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:43.360565+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:44.360743+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:45.360934+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:46.361091+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:47.361212+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:48.361335+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:49.361495+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:50.361706+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:51.361855+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:52.362007+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:53.362165+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1220608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:54.362344+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1212416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:55.362516+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1212416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:56.362700+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1212416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:57.362946+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1204224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:58.363127+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 1204224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:50:59.363287+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1327104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:00.363469+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 1318912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:01.363647+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 1318912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:02.363826+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 1318912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:03.364015+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 1318912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:04.364193+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:05.364405+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:06.364650+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:07.364799+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:08.364941+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:09.365077+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:10.365273+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:11.365417+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:12.365573+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:13.365712+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:14.365870+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:15.366119+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:16.366290+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:17.366420+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:18.366549+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:19.366720+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:20.366956+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:21.367104+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:22.367264+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:23.367390+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:24.367521+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:25.367660+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:26.367847+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:27.367938+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:28.368086+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:29.368245+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:30.368427+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:31.368591+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:32.368765+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 1310720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:33.368951+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:34.369126+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:35.369365+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:36.369721+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:37.370006+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:38.370183+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:39.370476+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:40.370703+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:41.370944+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:42.371198+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:43.371407+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:44.371600+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:45.371781+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:46.372018+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:47.372196+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:48.372413+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:49.372602+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:50.372733+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:51.372866+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:52.372993+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:53.373125+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:54.373308+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:55.373555+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:56.373749+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:57.373897+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:58.374038+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:51:59.374238+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:00.374432+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874766 data_alloc: 218103808 data_used: 102400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:01.374587+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 heartbeat osd_stat(store_statfs(0x1bca29000/0x0/0x1bfc00000, data 0x13c699/0x1f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:02.374743+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1302528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:03.374897+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 93.958160400s of 94.872962952s, submitted: 258
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 1228800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:04.375058+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1196032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 134 heartbeat osd_stat(store_statfs(0x1bca25000/0x0/0x1bfc00000, data 0x13e2f2/0x1f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:05.375190+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916830 data_alloc: 218103808 data_used: 110592
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 134 ms_handle_reset con 0x558696da1800 session 0x5586959261e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 10461184 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:06.375403+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 10444800 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586977b2000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:07.375584+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 ms_handle_reset con 0x5586977b2000 session 0x55869774f2c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:08.375726+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 10223616 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:09.375930+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 10223616 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:10.376168+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955535 data_alloc: 218103808 data_used: 110592
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 10223616 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:11.376365+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 10223616 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:12.376584+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 10223616 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:13.376718+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 10223616 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:14.376874+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 10223616 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:15.377075+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955535 data_alloc: 218103808 data_used: 110592
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 10223616 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:16.377284+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 10223616 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:17.377481+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 10223616 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:18.377675+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:19.377848+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:20.378124+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955535 data_alloc: 218103808 data_used: 110592
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:21.378308+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:22.378530+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:23.378667+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:24.378973+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:25.379195+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955535 data_alloc: 218103808 data_used: 110592
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:26.379372+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:27.379546+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:28.379724+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:29.379876+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:30.380076+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955535 data_alloc: 218103808 data_used: 110592
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:31.380217+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:32.380357+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:33.380496+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:34.380721+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:35.380906+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955535 data_alloc: 218103808 data_used: 110592
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:36.381242+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:37.381466+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:38.381702+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:39.381869+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:40.382015+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955535 data_alloc: 218103808 data_used: 110592
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:41.382192+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:42.382393+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:43.382598+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:44.382814+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 10215424 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:45.383000+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955695 data_alloc: 218103808 data_used: 114688
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10199040 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:46.383235+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10199040 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:47.383426+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10199040 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:48.383564+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10199040 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:49.383930+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10199040 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:50.384331+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955695 data_alloc: 218103808 data_used: 114688
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10199040 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:51.388030+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10199040 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 heartbeat osd_stat(store_statfs(0x1bc139000/0x0/0x1bfc00000, data 0xa238d8/0xae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 136 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 48.218475342s of 48.382373810s, submitted: 54
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:52.388204+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10190848 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586977b2400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 138 ms_handle_reset con 0x5586977b2400 session 0x55869752cb40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:53.388341+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:54.388509+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:55.388730+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964235 data_alloc: 218103808 data_used: 114688
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:56.388945+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:57.389194+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 138 heartbeat osd_stat(store_statfs(0x1bc133000/0x0/0x1bfc00000, data 0xa271de/0xae9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:58.389351+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:52:59.389491+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:00.389711+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964235 data_alloc: 218103808 data_used: 114688
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:01.389915+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 138 heartbeat osd_stat(store_statfs(0x1bc133000/0x0/0x1bfc00000, data 0xa271de/0xae9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:02.390040+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.471798897s of 10.636608124s, submitted: 16
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:03.390191+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:04.390331+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:05.390490+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966537 data_alloc: 218103808 data_used: 114688
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:06.390687+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:07.390923+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x558695813400 session 0x558697721e00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f3c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 9158656 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586956f3c00 session 0x5586975ac960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:08.391102+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:09.391291+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:10.391503+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966534 data_alloc: 218103808 data_used: 114688
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:11.391739+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:12.391972+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:13.392126+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:14.392301+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:15.392488+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966534 data_alloc: 218103808 data_used: 114688
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:16.392726+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:17.392947+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:18.393188+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:19.393488+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:20.393641+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966534 data_alloc: 218103808 data_used: 114688
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:21.393791+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:22.394061+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 9150464 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:23.394236+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:24.394389+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:25.394554+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966534 data_alloc: 218103808 data_used: 114688
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:26.394870+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:27.395195+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:28.395329+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:29.395468+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:30.395687+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966534 data_alloc: 218103808 data_used: 114688
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:31.395923+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:32.396138+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:33.396323+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:34.396547+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:35.396698+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966534 data_alloc: 218103808 data_used: 114688
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:36.396878+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:37.397054+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 9142272 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:38.397379+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f2c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586956f2c00 session 0x55869773cb40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x558696da1800 session 0x558697735a40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 4612096 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:39.397600+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 4612096 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:40.397864+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978054 data_alloc: 218103808 data_used: 4771840
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 4612096 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:41.398062+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 4612096 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:42.398470+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 4612096 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:43.398672+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869492e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.945446014s of 41.134548187s, submitted: 35
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x55869492e400 session 0x55869773c780
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 4628480 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:44.398850+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 4628480 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:45.399047+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979848 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 4628480 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:46.399291+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f2c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586956f2c00 session 0x558696b4c780
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f3c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586956f3c00 session 0x5586974b81e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 2793472 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x558695813400 session 0x5586956ecd20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:47.399591+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x558696da1800 session 0x558694aeeb40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 10936320 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:48.399816+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 10936320 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:49.399994+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 10936320 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:50.400204+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bba40000/0x0/0x1bfc00000, data 0x1119d7f/0x11de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040444 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 10936320 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:51.400475+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 10936320 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:52.400732+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 10936320 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:53.400915+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586977b4800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.090225220s of 10.322626114s, submitted: 58
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 10911744 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:54.401160+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586977b4800 session 0x558697719c20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 11419648 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:55.401349+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985220 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 11419648 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:56.402071+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 11419648 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:57.402268+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 11419648 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:58.402574+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:53:59.402976+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:00.403186+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985220 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:01.403376+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:02.403609+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:03.403798+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:04.404078+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:05.404281+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985220 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:06.404489+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:07.404779+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:08.404930+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:10.006497+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:11.006637+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985220 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:12.006775+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:13.006955+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:14.007193+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:15.007339+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:16.007464+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985220 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:17.007610+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:18.007742+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:19.007917+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:20.008082+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:21.008264+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985220 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:22.008539+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f2c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586956f2c00 session 0x55869752c960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f3c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 11427840 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586956f3c00 session 0x5586977492c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:23.008721+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 11427840 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:24.008964+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.073196411s of 30.132829666s, submitted: 19
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x558695813400 session 0x5586975354a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:25.009109+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d2d/0xaed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d2d/0xaed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:26.009255+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986872 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:27.009415+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x558696da1800 session 0x55869773d4a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586977b4c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586977b4c00 session 0x558694625c20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f2c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586956f2c00 session 0x5586974b9860
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f3c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d2d/0xaed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:28.009563+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586956f3c00 session 0x5586975ac5a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:29.009696+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:30.009819+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:31.009997+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986872 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:32.010123+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d2d/0xaed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:33.010316+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:34.010457+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:35.010598+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d2d/0xaed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:36.010744+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986872 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 11411456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:37.010917+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x558695813400 session 0x5586977021e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.880990028s of 12.937719345s, submitted: 10
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x558696da1800 session 0x55869779e1e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:38.011077+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:39.011277+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:40.011457+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:41.011632+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:42.011806+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:43.011953+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:44.012074+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:45.012357+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:46.012717+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:47.012959+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:48.013258+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:49.013466+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:50.013812+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:51.013996+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:52.014193+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:53.014360+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:54.014497+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:55.014675+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:56.014844+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:57.014973+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:58.015210+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:54:59.015374+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:00.015485+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:01.015596+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:02.015713+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:03.015823+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:04.016167+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:05.016315+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:06.016487+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:07.016690+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:08.016869+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:09.017056+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:10.017243+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:11.017375+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:12.017560+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:13.017704+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:14.017840+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:15.017963+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:16.018076+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:17.018220+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:18.018347+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:19.018475+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:20.018591+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:21.018745+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:22.018877+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 11395072 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:23.018978+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:24.019105+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:25.019215+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:26.019333+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:27.019517+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:28.019603+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:29.019867+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:30.020041+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:31.020169+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:32.020279+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:33.020365+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:34.020455+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:35.020567+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:36.020697+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:37.021075+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:38.021183+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:39.021303+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:40.021412+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:41.021554+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985044 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:42.021696+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:43.021823+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:44.021974+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586977b5000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 67.367256165s of 67.392700195s, submitted: 6
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 11386880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586977b5000 session 0x5586974b9c20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f2c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:45.022229+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586956f2c00 session 0x558697720960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:46.022396+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f3c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986541 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586956f3c00 session 0x5586967ea5a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:47.022539+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x558695813400 session 0x558695942780
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:48.022663+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:49.023082+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:50.023235+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:51.023347+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987578 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:52.023470+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:53.023580+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:54.023699+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:55.023812+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:56.023943+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987578 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:57.024123+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:58.024248+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc131000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:55:59.024362+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:00.024508+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:01.024632+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987578 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.864265442s of 17.095836639s, submitted: 13
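_kv_sync_thread is the thread that batches BlueStore metadata transactions into RocksDB commits, and its utilization line is plain arithmetic: over the last 17.10 s window it was idle for 16.86 s (about 98.6%), i.e. busy roughly 1.4%, while handling 13 submitted transactions. Worked out:

    idle, window, submitted = 16.864265442, 17.095836639, 13
    busy = window - idle
    print(f"busy {busy:.3f}s of {window:.3f}s ({busy / window:.1%}), "
          f"{submitted / window:.2f} commits/s")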
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 11403264 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:02.024748+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 11223040 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x558696da1800 session 0x55869779f2c0
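handle_auth_request added challenge on 0x558696da1800 a few lines earlier, followed here by ms_handle_reset on the same connection pointer, is the signature of a short-lived inbound connection: a peer (likely a mgr or monitor health probe, though the log does not identify it) connects, answers a cephx challenge, finishes, and drops the session, which the messenger reports as a reset. The shared pointer makes the pairing easy to check mechanically; a sketch:

    import re

    events = [
        "monclient: handle_auth_request added challenge on 0x558696da1800",
        "osd.0 139 ms_handle_reset con 0x558696da1800 session 0x55869779f2c0",
    ]
    pending = {}
    for e in events:
        if m := re.search(r"added challenge on (0x[0-9a-f]+)", e):
            pending[m.group(1)] = e
        elif m := re.search(r"ms_handle_reset con (0x[0-9a-f]+)", e):
            con = m.group(1)
            print(f"{con}: challenged then reset" if con in pending
                  else f"{con}: reset with no prior challenge")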
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:03.024937+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 11223040 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:04.025053+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586977b5000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586977b5000 session 0x558696d42960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f2c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 11198464 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:05.025206+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 11165696 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:06.025344+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986673 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 11042816 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:07.025518+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 ms_handle_reset con 0x5586956f2c00 session 0x5586977c8000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 11042816 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:08.025669+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 11034624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:09.025830+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 11034624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:10.026004+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 11034624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:11.026124+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985949 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 11034624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:12.026267+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 11034624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:13.026446+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 11034624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:14.026675+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 11026432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:15.026796+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 11026432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:16.026981+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985949 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11018240 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:17.027183+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11018240 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:18.027341+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11018240 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:19.027521+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11018240 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:20.027706+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11018240 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:21.027875+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985949 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11018240 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:22.028155+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11018240 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:23.028327+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11018240 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:24.028470+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 11010048 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:25.028616+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 11010048 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:26.028749+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985949 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 11010048 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:27.028968+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 11010048 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:28.029139+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 11001856 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:29.029307+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 11001856 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:30.029504+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 10993664 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:31.029658+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985949 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 10993664 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:32.030073+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 10993664 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:33.030288+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 10993664 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:34.030409+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 10993664 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:35.030530+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f3c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.466094971s of 33.925270081s, submitted: 193
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 heartbeat osd_stat(store_statfs(0x1bc132000/0x0/0x1bfc00000, data 0xa28d1d/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 10985472 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:36.030682+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989686 data_alloc: 218103808 data_used: 4780032
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
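Here the OSD advances its cluster map: _renew_subs re-subscribes to osdmap updates, _send_mon_message goes to mon.compute-0 at v2:192.168.122.100:3300, and the monitor delivers incremental epochs [140,140]; since osd.0 reports "i have 139", it applies epoch 140, and the following lines switch to "osd.0 140" (then 141 and 142 by the same mechanism). The catch-up rule amounts to applying every advertised epoch above your own, in order, as in this illustrative sketch (not Ceph's implementation):

    def handle_osd_map(have: int, first: int, last: int) -> int:
        # Apply each incremental map we do not already have, in order.
        for epoch in range(max(have + 1, first), last + 1):
            print(f"applying incremental osdmap epoch {epoch}")
            have = epoch
        return have

    have = handle_osd_map(139, 140, 140)   # -> 140, matching the next log lines
    print("now at epoch", have)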
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 10977280 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 140 ms_handle_reset con 0x5586956f3c00 session 0x5586953cc000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:37.030868+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 10977280 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:38.031008+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 10977280 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:39.031127+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 140 heartbeat osd_stat(store_statfs(0x1bc12b000/0x0/0x1bfc00000, data 0xa2b322/0xaf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 10977280 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:40.031274+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 11272192 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:41.031420+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 141 ms_handle_reset con 0x558695813400 session 0x55869779fe00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996364 data_alloc: 218103808 data_used: 4796416
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 10215424 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:42.031560+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 10215424 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:43.031725+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 10215424 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:44.032028+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 10215424 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:45.032219+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 141 heartbeat osd_stat(store_statfs(0x1bc12a000/0x0/0x1bfc00000, data 0xa2c623/0xaf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 10215424 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:46.032344+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996524 data_alloc: 218103808 data_used: 4800512
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:47.032522+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 10215424 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:48.032723+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 10215424 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.663561821s of 12.922884941s, submitted: 46
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:49.032947+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 10207232 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:50.033109+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 10207232 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:51.033249+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 10207232 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998397 data_alloc: 218103808 data_used: 4800512
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:52.033415+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 10207232 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:53.033615+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 10207232 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:54.033776+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 10207232 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:55.033972+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 10207232 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:56.034150+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 10207232 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998397 data_alloc: 218103808 data_used: 4800512
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:57.034332+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 10199040 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:58.034513+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 10199040 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:56:59.034692+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 10190848 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:00.034831+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 10190848 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:01.034964+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 10190848 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998397 data_alloc: 218103808 data_used: 4800512
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:02.035150+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 10190848 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:03.035289+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 10190848 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:04.035471+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 10190848 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:05.035615+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 10190848 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:06.035760+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 10190848 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998397 data_alloc: 218103808 data_used: 4800512
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:07.035953+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 10190848 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:08.036089+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 10190848 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:09.036228+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 10182656 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:10.036432+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 10182656 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:11.036587+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 10182656 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998397 data_alloc: 218103808 data_used: 4800512
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:12.036734+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 10182656 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:13.036870+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 10182656 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:14.037083+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 10182656 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:15.037235+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 10182656 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:16.037375+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 10182656 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998397 data_alloc: 218103808 data_used: 4800512
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:17.037722+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 10182656 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:18.037927+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 10182656 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:19.038075+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 10174464 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:20.038260+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 10174464 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:21.038448+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 10174464 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998397 data_alloc: 218103808 data_used: 4800512
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:22.038671+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 10174464 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:23.038875+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 10174464 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:24.039152+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 10174464 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:25.039340+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 10174464 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:26.039471+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 10174464 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998397 data_alloc: 218103808 data_used: 4800512
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:27.039648+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 10174464 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:28.039783+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 10174464 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:29.039971+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 10166272 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:30.040157+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 10166272 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:31.040346+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 10166272 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 heartbeat osd_stat(store_statfs(0x1bc128000/0x0/0x1bfc00000, data 0xa2e162/0xaf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998397 data_alloc: 218103808 data_used: 4800512
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:32.040563+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 10166272 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:33.040767+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 10166272 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.291366577s of 45.388069153s, submitted: 14
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:34.040958+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 10158080 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 143 ms_handle_reset con 0x558696da1800 session 0x558695942960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:35.041161+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 10149888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 143 heartbeat osd_stat(store_statfs(0x1bc125000/0x0/0x1bfc00000, data 0xa2fdbb/0xaf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:36.041356+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 143 heartbeat osd_stat(store_statfs(0x1bc125000/0x0/0x1bfc00000, data 0xa2fdbb/0xaf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 10149888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003188 data_alloc: 218103808 data_used: 4808704
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:37.041559+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 10149888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:38.041740+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 10149888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:39.041867+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 10149888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:40.041988+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 143 heartbeat osd_stat(store_statfs(0x1bc125000/0x0/0x1bfc00000, data 0xa2fdbb/0xaf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 10149888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:41.042129+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 10149888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003188 data_alloc: 218103808 data_used: 4808704
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:42.042275+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 10149888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:43.042412+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 10149888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:44.042594+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 10149888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586977b5400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 143 ms_handle_reset con 0x5586977b5400 session 0x5586977181e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f2c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:45.042736+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 10149888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 143 ms_handle_reset con 0x5586956f2c00 session 0x558696b4d860
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 143 heartbeat osd_stat(store_statfs(0x1bc125000/0x0/0x1bfc00000, data 0xa2fdbb/0xaf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:46.042938+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f3c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.005071640s of 12.063801765s, submitted: 12
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 10133504 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010407 data_alloc: 218103808 data_used: 4820992
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:47.043098+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 144 ms_handle_reset con 0x5586956f3c00 session 0x55869779f860
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 10125312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:48.043268+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 10125312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:49.043429+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 10125312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:50.043598+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 144 heartbeat osd_stat(store_statfs(0x1bc120000/0x0/0x1bfc00000, data 0xa31a5a/0xafd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 10125312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:51.043743+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 10125312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 144 heartbeat osd_stat(store_statfs(0x1bc120000/0x0/0x1bfc00000, data 0xa31a5a/0xafd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:52.043955+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010351 data_alloc: 218103808 data_used: 4820992
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 10125312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:53.044137+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 10125312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:54.044284+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 10125312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:55.044477+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 10125312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 144 heartbeat osd_stat(store_statfs(0x1bc120000/0x0/0x1bfc00000, data 0xa31a5a/0xafd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:56.044686+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 10117120 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:57.044951+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010351 data_alloc: 218103808 data_used: 4820992
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 10117120 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:58.045139+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 10117120 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:57:59.045330+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 10117120 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:00.045464+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 10117120 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 144 heartbeat osd_stat(store_statfs(0x1bc120000/0x0/0x1bfc00000, data 0xa31a5a/0xafd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:01.045590+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 10117120 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.155312538s of 15.214276314s, submitted: 14
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 144 heartbeat osd_stat(store_statfs(0x1bc122000/0x0/0x1bfc00000, data 0xa31a37/0xafc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:02.045729+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008781 data_alloc: 218103808 data_used: 4820992
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 10076160 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 145 ms_handle_reset con 0x558695813400 session 0x558696d42b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 145 ms_handle_reset con 0x558696da1800 session 0x558696d392c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:03.045856+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 10043392 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:04.045973+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 10043392 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:05.046182+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 10043392 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:06.046333+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 10043392 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:07.046557+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 145 heartbeat osd_stat(store_statfs(0x1bbd0e000/0x0/0x1bfc00000, data 0xa336c1/0xafe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011371 data_alloc: 218103808 data_used: 4829184
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 10043392 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:08.046737+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 10043392 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:09.046859+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 10035200 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:10.046971+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 10035200 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 146 heartbeat osd_stat(store_statfs(0x1bbd0e000/0x0/0x1bfc00000, data 0xa336c1/0xafe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:11.047677+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 10035200 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:12.047861+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013244 data_alloc: 218103808 data_used: 4829184
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 10035200 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:13.048110+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 10035200 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:14.048269+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 10035200 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:15.048400+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 146 heartbeat osd_stat(store_statfs(0x1bbd0c000/0x0/0x1bfc00000, data 0xa35200/0xb01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 10035200 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 146 heartbeat osd_stat(store_statfs(0x1bbd0c000/0x0/0x1bfc00000, data 0xa35200/0xb01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:16.048541+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 10035200 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:17.048764+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013244 data_alloc: 218103808 data_used: 4829184
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 10035200 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:18.048974+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 10035200 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:19.049143+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 10035200 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 146 heartbeat osd_stat(store_statfs(0x1bbd0c000/0x0/0x1bfc00000, data 0xa35200/0xb01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:20.049274+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 10035200 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 146 heartbeat osd_stat(store_statfs(0x1bbd0c000/0x0/0x1bfc00000, data 0xa35200/0xb01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586977b5800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.076614380s of 19.276321411s, submitted: 79
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:21.049436+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 10387456 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 147 ms_handle_reset con 0x5586977b5800 session 0x558697720b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:22.049627+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022045 data_alloc: 218103808 data_used: 4837376
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 10362880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:23.049801+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 10362880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:24.049958+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 10362880 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f2c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 147 heartbeat osd_stat(store_statfs(0x1bbd06000/0x0/0x1bfc00000, data 0xa36eaf/0xb07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [0,0,0,0,1])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:25.050101+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 10354688 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:26.050240+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 10354688 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 148 ms_handle_reset con 0x5586956f2c00 session 0x558695928780
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:27.050407+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028044 data_alloc: 218103808 data_used: 4845568
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 10346496 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:28.050539+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 10346496 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:29.050713+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 148 heartbeat osd_stat(store_statfs(0x1bbd02000/0x0/0x1bfc00000, data 0xa38b2b/0xb0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 10346496 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f3c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:30.054388+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 9297920 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.723957539s of 10.021198273s, submitted: 31
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:31.054543+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 10346496 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 148 handle_osd_map epochs [149,149], i have 149, src has [1,149]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 149 ms_handle_reset con 0x5586956f3c00 session 0x55869774e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:32.054704+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1032959 data_alloc: 218103808 data_used: 4853760
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 10330112 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:33.055023+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 10330112 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:34.055187+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 10330112 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:35.055311+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 149 heartbeat osd_stat(store_statfs(0x1bbcfe000/0x0/0x1bfc00000, data 0xa3a7a7/0xb0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 10330112 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:36.055501+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 10321920 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:37.055683+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1032959 data_alloc: 218103808 data_used: 4853760
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 10321920 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:38.055819+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 10321920 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 149 heartbeat osd_stat(store_statfs(0x1bbcfe000/0x0/0x1bfc00000, data 0xa3a7a7/0xb0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:39.056000+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 10321920 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:40.056194+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 10321920 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:41.056385+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.570177078s of 10.865679741s, submitted: 3
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 10321920 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:42.056515+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1032151 data_alloc: 218103808 data_used: 4853760
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 10297344 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 150 ms_handle_reset con 0x558695813400 session 0x5586975c3680
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:43.056655+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86589440 unmapped: 9232384 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:44.056746+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 150 heartbeat osd_stat(store_statfs(0x1bbcfb000/0x0/0x1bfc00000, data 0xa3c431/0xb11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86589440 unmapped: 9232384 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:45.056870+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 9216000 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:46.057016+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86630400 unmapped: 9191424 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:47.057230+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037088 data_alloc: 218103808 data_used: 4890624
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86671360 unmapped: 9150464 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:48.057365+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 151 ms_handle_reset con 0x558696da1800 session 0x5586977a01e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86654976 unmapped: 9166848 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:49.057539+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586977b5c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 151 heartbeat osd_stat(store_statfs(0x1bbcfb000/0x0/0x1bfc00000, data 0xa3e0bb/0xb13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86654976 unmapped: 9166848 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 151 heartbeat osd_stat(store_statfs(0x1bbcfb000/0x0/0x1bfc00000, data 0xa3e0bb/0xb13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:50.057650+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 9158656 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 153 ms_handle_reset con 0x5586977b5c00 session 0x558695934000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:51.057782+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86695936 unmapped: 9125888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:52.057986+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042221 data_alloc: 218103808 data_used: 4898816
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86695936 unmapped: 9125888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:53.058092+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86695936 unmapped: 9125888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:54.058219+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86695936 unmapped: 9125888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 153 heartbeat osd_stat(store_statfs(0x1bbcf5000/0x0/0x1bfc00000, data 0xa4186d/0xb16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:55.058417+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86695936 unmapped: 9125888 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.043816566s of 14.378234863s, submitted: 112
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:56.058526+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 9109504 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:57.058688+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044347 data_alloc: 218103808 data_used: 4898816
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 9109504 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:58.058820+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 9109504 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:58:59.058918+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 9109504 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:00.059061+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 8048 writes, 29K keys, 8048 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8048 writes, 1890 syncs, 4.26 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1715 writes, 3702 keys, 1715 commit groups, 1.0 writes per commit group, ingest: 1.94 MB, 0.00 MB/s
                                           Interval WAL: 1715 writes, 725 syncs, 2.37 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 9109504 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 154 heartbeat osd_stat(store_statfs(0x1bbcf4000/0x0/0x1bfc00000, data 0xa433c8/0xb19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 154 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:01.059218+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86720512 unmapped: 9101312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:02.059392+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047321 data_alloc: 218103808 data_used: 4898816
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86720512 unmapped: 9101312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:03.059512+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86720512 unmapped: 9101312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:04.059646+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86720512 unmapped: 9101312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:05.059796+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86720512 unmapped: 9101312 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: mgrc ms_handle_reset ms_handle_reset con 0x5586944fdc00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/510010839
Jan 27 09:22:32 compute-0 ceph-osd[84951]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/510010839,v1:192.168.122.100:6801/510010839]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: get_auth_request con 0x5586977b5800 auth_method 0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: mgrc handle_mgr_configure stats_period=5
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:06.059956+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x55869537dc00 session 0x558693e2d680
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f2c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf1000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558696da0800 session 0x5586975c34a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586956f3c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558695813c00 session 0x558693e2de00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869582b000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 8994816 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:07.060110+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047321 data_alloc: 218103808 data_used: 4898816
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 8994816 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf1000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:08.060239+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 8994816 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:09.060430+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 8994816 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:10.060578+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 8994816 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:11.060710+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:12.060837+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047321 data_alloc: 218103808 data_used: 4898816
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf1000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:13.061024+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:14.061150+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.691165924s of 19.142379761s, submitted: 31
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558695813c00 session 0x5586956edc20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:15.061267+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf1000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:16.061574+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:17.061733+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf1000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047321 data_alloc: 218103808 data_used: 4898816
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:18.061876+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:19.062053+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf1000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:20.062211+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:21.062391+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:22.062550+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047321 data_alloc: 218103808 data_used: 4898816
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:23.062739+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:24.062854+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf1000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:25.062979+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:26.063115+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:27.063307+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047321 data_alloc: 218103808 data_used: 4898816
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:28.063442+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:29.063578+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:30.063795+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf1000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:31.063990+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:32.064170+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047321 data_alloc: 218103808 data_used: 4898816
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:33.064348+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:34.064484+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:35.064628+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf1000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:36.064804+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:37.065001+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047321 data_alloc: 218103808 data_used: 4898816
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:38.065182+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf1000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 8986624 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:39.065321+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:40.065435+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:41.065551+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.525215149s of 26.530452728s, submitted: 1
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558696da1800 session 0x55869752cb40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:42.065734+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046441 data_alloc: 218103808 data_used: 4898816
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:43.065934+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:44.066077+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:45.066250+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:46.066436+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:47.066636+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:48.066766+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:49.066911+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:50.067036+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:51.067194+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:52.067367+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:53.067549+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:54.067706+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:55.067876+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:56.068084+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:57.068264+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:58.068407+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T08:59:59.068524+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:00.068653+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:01.068790+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:02.068932+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:03.069069+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:04.069213+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:05.070042+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:06.070161+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:07.070323+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:08.070456+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:09.070606+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:10.071025+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:11.071159+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:12.071388+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:13.071543+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:14.071719+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:15.071873+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:16.072119+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:17.072352+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:18.072522+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:19.072639+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:20.072793+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:21.072927+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:22.073130+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:23.073302+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:24.073448+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:25.073579+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:26.073761+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:27.074089+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:28.074261+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 8978432 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.482311249s of 47.522048950s, submitted: 5
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:29.074411+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 8970240 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:30.074556+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 8929280 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:31.074694+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87154688 unmapped: 8667136 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:32.074966+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 8552448 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:33.075113+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 8552448 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:34.075270+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 8544256 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:35.075444+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 8544256 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:36.075627+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 8544256 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:37.075852+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 8544256 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:38.076038+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 8544256 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:39.076227+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 8536064 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:40.076377+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 8536064 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:41.076505+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 8527872 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:42.076638+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 8527872 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:43.076789+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 8527872 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:44.076956+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 8519680 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:45.077091+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 8519680 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:46.077248+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 8519680 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:47.077519+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 8519680 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:48.077705+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 8519680 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:49.077914+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 8511488 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:50.078052+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 8511488 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:51.079574+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 8511488 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:52.079748+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 8511488 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:53.082436+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 8511488 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:54.082613+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 8511488 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:55.083746+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 8511488 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:56.084300+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 8511488 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:57.085085+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 8503296 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:58.085461+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 8503296 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:00:59.085743+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 8503296 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:00.085955+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 8503296 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:01.086601+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 8503296 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:02.086802+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 8503296 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:03.087022+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 8503296 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:04.087402+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 8503296 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:05.087546+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 8495104 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:06.087718+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 8495104 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:07.087932+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 8495104 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:08.088053+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 8495104 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:09.088238+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 8495104 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:10.088353+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 8495104 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:11.088499+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 8495104 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:12.088749+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 8495104 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:13.088976+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87334912 unmapped: 8486912 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:14.089427+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87334912 unmapped: 8486912 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:15.089621+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87334912 unmapped: 8486912 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:16.089791+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87334912 unmapped: 8486912 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:17.090207+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87334912 unmapped: 8486912 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046601 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:18.090393+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87334912 unmapped: 8486912 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:19.090584+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697984000 session 0x5586977a05a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87343104 unmapped: 8478720 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697984400 session 0x5586977a0b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:20.090735+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87343104 unmapped: 8478720 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:21.090944+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87343104 unmapped: 8478720 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 51.607799530s of 52.619045258s, submitted: 258
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697984800 session 0x5586953cc3c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:22.091092+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 8470528 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048429 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:23.091703+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 8470528 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf1000/0x0/0x1bfc00000, data 0xa44f17/0xb1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:24.091966+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558695813c00 session 0x558697735680
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558696da1800 session 0x5586974b81e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 87367680 unmapped: 8454144 heap: 95821824 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697984000 session 0x558697534000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:25.092084+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697984400 session 0x55869752c000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88604672 unmapped: 11419648 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:26.092297+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88612864 unmapped: 11411456 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:27.092810+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bb32f000/0x0/0x1bfc00000, data 0x1406f16/0x14df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88612864 unmapped: 11411456 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129721 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:28.092988+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88612864 unmapped: 11411456 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:29.093493+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 11403264 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:30.093637+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 11403264 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:31.094127+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697984c00 session 0x558695927a40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558695813c00 session 0x558695928f00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 11395072 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:32.094273+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 11395072 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:33.094650+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 11395072 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:34.095017+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 11395072 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:35.095452+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 11395072 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:36.095745+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 11395072 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:37.096008+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 11362304 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:38.096202+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 11362304 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:39.096439+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 11362304 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:40.096680+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 11362304 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:41.096958+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 11362304 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:42.097212+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 11362304 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:43.097354+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 11362304 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:44.097599+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 11362304 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:45.097724+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 11362304 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:46.097846+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 11362304 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:47.098093+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 11362304 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:48.098220+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 11362304 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:49.098573+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:50.098705+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:51.098867+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:52.099090+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:53.099223+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:54.099386+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:55.099588+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:56.099816+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:57.100110+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:58.100315+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:01:59.100597+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:00.100810+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:01.101011+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:02.101151+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:03.101444+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:04.101591+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:05.101724+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:06.101859+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:07.102082+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:08.102233+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:09.102393+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:10.102546+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:11.102749+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:12.355462+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:13.355596+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:14.355774+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:15.355940+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:16.356059+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:17.356231+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:18.356359+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:19.356491+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:20.356620+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:21.356746+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:22.356859+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 11354112 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:23.357209+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:24.357359+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:25.357499+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:26.357662+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:27.357830+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:28.357991+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:29.358124+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:30.358249+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:31.358426+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:32.358585+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:33.358713+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:34.358836+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:35.358988+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:36.359119+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:37.359309+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:38.359458+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:39.359614+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:40.359746+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:41.359939+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:42.360171+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:43.360333+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:44.360585+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:45.360798+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:46.360957+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:47.361195+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:48.361384+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:49.361574+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:50.361867+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:51.362131+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:52.362292+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:53.362457+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:54.362750+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:55.363063+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:56.363228+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:57.363451+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:58.363657+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:02:59.363842+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:00.363984+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:01.364264+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:02.364494+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:03.364653+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:04.364959+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 11345920 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:05.365163+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:06.365464+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:07.365631+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:08.365743+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:09.366115+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:10.433836+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:11.434055+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:12.434324+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:13.434554+0000)
Jan 27 09:22:32 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27740 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:14.434745+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:15.434980+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:16.435230+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:17.435447+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:18.435619+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:19.435770+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:20.435938+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:21.436067+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:22.436664+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:23.436799+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:24.436932+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:25.437058+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:26.437188+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:27.437342+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:28.437473+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:29.437625+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:30.437764+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:31.437922+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:32.438104+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:33.438997+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:34.439425+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:35.440210+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:36.440405+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:37.441280+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:38.441383+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:39.441599+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:40.442242+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:41.442823+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558696da1800 session 0x558696d390e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:42.443130+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697984000 session 0x558694900b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:43.443534+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 142.298202515s of 142.434219360s, submitted: 37
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697984400 session 0x5586949001e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:44.443871+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:45.444185+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:46.444608+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697985000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:47.445003+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bbcf2000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [0,0,0,0,0,1])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697985000 session 0x5586975c2000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 11640832 heap: 100024320 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:48.445285+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052467 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 15925248 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:49.445721+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558695813c00 session 0x5586977c9e00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558696da1800 session 0x558696b4c3c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 15925248 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:50.446404+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba184000/0x0/0x1bfc00000, data 0x1411f69/0x14ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 15892480 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:51.446571+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91496448 unmapped: 15876096 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:52.446854+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697984000 session 0x55869752cf00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba184000/0x0/0x1bfc00000, data 0x1411f69/0x14ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 15843328 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:53.447150+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130680 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 15843328 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:54.447517+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.741862297s of 11.164292336s, submitted: 41
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 15843328 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:55.448064+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba184000/0x0/0x1bfc00000, data 0x1411f69/0x14ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697984400 session 0x55869752d0e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:56.448330+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:57.448558+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:58.448765+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133120 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:03:59.449008+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba183000/0x0/0x1bfc00000, data 0x1411fcb/0x14eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:00.449311+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:01.449573+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:02.449806+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:03.450042+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133120 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:04.450204+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:05.450398+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba183000/0x0/0x1bfc00000, data 0x1411fcb/0x14eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:06.450617+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:07.450831+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:08.451040+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133120 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:09.451288+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:10.451584+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:11.451812+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba183000/0x0/0x1bfc00000, data 0x1411fcb/0x14eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:12.451998+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:13.452170+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133120 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:14.452354+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:15.452538+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba183000/0x0/0x1bfc00000, data 0x1411fcb/0x14eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:16.452838+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:17.453096+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:18.453239+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133120 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:19.453371+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:20.453570+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba183000/0x0/0x1bfc00000, data 0x1411fcb/0x14eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:21.453765+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:22.453949+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:23.454135+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133120 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:24.454338+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:25.454547+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:26.454755+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba183000/0x0/0x1bfc00000, data 0x1411fcb/0x14eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:27.455003+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:28.455192+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 15835136 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133120 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:29.455371+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 15826944 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba183000/0x0/0x1bfc00000, data 0x1411fcb/0x14eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:30.455545+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 15826944 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:31.455699+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 15826944 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:32.456035+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 15826944 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:33.456236+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 15826944 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133120 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:34.456423+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 15826944 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba183000/0x0/0x1bfc00000, data 0x1411fcb/0x14eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:35.456608+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 15826944 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:36.456809+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 15826944 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:37.456994+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 15826944 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:38.457122+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 15826944 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697985400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.961364746s of 43.434341431s, submitted: 3
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697985400 session 0x5586967ea5a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133277 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba183000/0x0/0x1bfc00000, data 0x1411fcb/0x14eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:39.457316+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91561984 unmapped: 15810560 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558695813c00 session 0x558697ed6780
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:40.457579+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 15777792 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:41.457800+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba184000/0x0/0x1bfc00000, data 0x1411f69/0x14ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 15777792 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558696da1800 session 0x5586977a14a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1ba184000/0x0/0x1bfc00000, data 0x1411f69/0x14ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:42.458064+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 15794176 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697984000 session 0x5586967ebe00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:43.458249+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 15745024 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060334 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bab51000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:44.458372+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 15745024 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:45.458586+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 15745024 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bab51000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:46.458829+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 15745024 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:47.459057+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 15745024 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:48.459223+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 15745024 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060334 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:49.459427+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 15736832 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:50.459616+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 15736832 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bab51000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:51.459846+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 15736832 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:52.460055+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 15736832 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:53.460231+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 15736832 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060334 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:54.460361+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 15736832 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:55.460563+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 15736832 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:56.460755+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 15736832 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bab51000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:57.461038+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 15736832 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:58.461240+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 15736832 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bab51000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060334 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:04:59.461441+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:00.461642+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:01.461858+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:02.462121+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:03.462346+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 27 09:22:32 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555584008' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060334 data_alloc: 218103808 data_used: 4902912
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:04.462539+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bab51000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:05.462710+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 heartbeat osd_stat(store_statfs(0x1bab51000/0x0/0x1bfc00000, data 0xa44f07/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:06.463013+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.358110428s of 28.102293015s, submitted: 52
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 ms_handle_reset con 0x558697984400 session 0x5586968514a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:07.463267+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697985800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 156 ms_handle_reset con 0x558697985800 session 0x558697718b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:08.463491+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066127 data_alloc: 218103808 data_used: 4911104
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 156 ms_handle_reset con 0x558695813c00 session 0x558696d392c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:09.463711+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 157 ms_handle_reset con 0x558696da1800 session 0x558696d39e00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:10.463852+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 157 heartbeat osd_stat(store_statfs(0x1bab4a000/0x0/0x1bfc00000, data 0xa4886f/0xb23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:11.463985+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 15728640 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:12.464179+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 15704064 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 157 ms_handle_reset con 0x558697984000 session 0x558697720960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:13.464354+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 15704064 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067334 data_alloc: 218103808 data_used: 4911104
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:14.464505+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 15704064 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:15.464624+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 15704064 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:16.464816+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 15704064 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 157 heartbeat osd_stat(store_statfs(0x1bab4c000/0x0/0x1bfc00000, data 0xa4880d/0xb22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:17.464974+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 15704064 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:18.465116+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 15704064 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067334 data_alloc: 218103808 data_used: 4911104
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.541372299s of 12.399555206s, submitted: 36
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697985c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 157 ms_handle_reset con 0x558697984400 session 0x55869774eb40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 157 ms_handle_reset con 0x558697985c00 session 0x558696752780
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:19.465255+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 157 ms_handle_reset con 0x558695813c00 session 0x5586975c2d20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 15704064 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:20.465396+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 15695872 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 ms_handle_reset con 0x558696da1800 session 0x55869779ef00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:21.465562+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 heartbeat osd_stat(store_statfs(0x1bab48000/0x0/0x1bfc00000, data 0xa4a34c/0xb25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 15695872 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 ms_handle_reset con 0x55869683f000 session 0x5586977185a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 ms_handle_reset con 0x55869683ec00 session 0x55869779f0e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:22.465754+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 14630912 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:23.465979+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 14630912 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073185 data_alloc: 218103808 data_used: 4919296
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:24.466129+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 14630912 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 ms_handle_reset con 0x55869683e800 session 0x558694c3cd20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:25.466378+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 14647296 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 ms_handle_reset con 0x55869683e000 session 0x5586959261e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:26.466527+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 ms_handle_reset con 0x55869683e400 session 0x5586949005a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93028352 unmapped: 18022400 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 ms_handle_reset con 0x55869683f000 session 0x5586968503c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:27.466747+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 heartbeat osd_stat(store_statfs(0x1ba568000/0x0/0x1bfc00000, data 0x102a35c/0x1106000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93028352 unmapped: 18022400 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:28.466937+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93028352 unmapped: 18022400 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 ms_handle_reset con 0x55869683ec00 session 0x5586967ea780
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119169 data_alloc: 218103808 data_used: 4919296
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:29.467088+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 17965056 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.087244034s of 11.227088928s, submitted: 53
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:30.467214+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 ms_handle_reset con 0x55869683e800 session 0x5586977c8000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 17956864 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:31.467377+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 17956864 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:32.467540+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 17948672 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 heartbeat osd_stat(store_statfs(0x1ba566000/0x0/0x1bfc00000, data 0x102a3ce/0x1108000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:33.467689+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 17948672 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120997 data_alloc: 218103808 data_used: 4919296
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:34.467842+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 17948672 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:35.468028+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 17948672 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:36.468200+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 17948672 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 heartbeat osd_stat(store_statfs(0x1ba566000/0x0/0x1bfc00000, data 0x102a3ce/0x1108000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:37.468505+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 17948672 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:38.468671+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 17948672 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120997 data_alloc: 218103808 data_used: 4919296
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:39.468813+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 17948672 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:40.469028+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 17948672 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:41.469203+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 17940480 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 heartbeat osd_stat(store_statfs(0x1ba566000/0x0/0x1bfc00000, data 0x102a3ce/0x1108000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:42.469370+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 17940480 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:43.469759+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 17940480 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120997 data_alloc: 218103808 data_used: 4919296
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:44.470563+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 17940480 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:45.470684+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 17940480 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:46.470925+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 17940480 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:47.471126+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 heartbeat osd_stat(store_statfs(0x1ba566000/0x0/0x1bfc00000, data 0x102a3ce/0x1108000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 17940480 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:48.471273+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 17940480 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121317 data_alloc: 218103808 data_used: 4927488
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:49.471444+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 17940480 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 heartbeat osd_stat(store_statfs(0x1ba566000/0x0/0x1bfc00000, data 0x102a3ce/0x1108000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:50.471575+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 17940480 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:51.471674+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 heartbeat osd_stat(store_statfs(0x1ba566000/0x0/0x1bfc00000, data 0x102a3ce/0x1108000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 17940480 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 heartbeat osd_stat(store_statfs(0x1ba566000/0x0/0x1bfc00000, data 0x102a3ce/0x1108000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:52.471818+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 17940480 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:53.471965+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 heartbeat osd_stat(store_statfs(0x1ba566000/0x0/0x1bfc00000, data 0x102a3ce/0x1108000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 17932288 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121317 data_alloc: 218103808 data_used: 4927488
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:54.472098+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 17932288 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:55.472217+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 17932288 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:56.472365+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.371683121s of 26.604547501s, submitted: 2
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 17907712 heap: 111050752 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:57.472593+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 heartbeat osd_stat(store_statfs(0x1ba564000/0x0/0x1bfc00000, data 0x102a3fc/0x110a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93208576 unmapped: 26238976 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:58.472761+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 158 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93077504 unmapped: 26370048 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185342 data_alloc: 218103808 data_used: 4935680
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:05:59.472937+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 159 ms_handle_reset con 0x55869683e000 session 0x5586967eaf00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 26353664 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:00.473064+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 26353664 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:01.473183+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 159 heartbeat osd_stat(store_statfs(0x1b9d5f000/0x0/0x1bfc00000, data 0x182c07d/0x190e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 26353664 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:02.473315+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 26353664 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:03.473451+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 26353664 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185286 data_alloc: 218103808 data_used: 4935680
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:04.473599+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 26353664 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:05.473735+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 159 heartbeat osd_stat(store_statfs(0x1b9d5f000/0x0/0x1bfc00000, data 0x182c07d/0x190e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 26353664 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:06.473875+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 26353664 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.316424370s of 10.245281219s, submitted: 21
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 159 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:07.474054+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 26501120 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:08.474204+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 26501120 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186913 data_alloc: 218103808 data_used: 4947968
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:09.474348+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 160 heartbeat osd_stat(store_statfs(0x1b9d5d000/0x0/0x1bfc00000, data 0x182dd07/0x1910000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 26476544 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:10.483566+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 160 ms_handle_reset con 0x55869683e400 session 0x5586967e4b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 160 heartbeat osd_stat(store_statfs(0x1ba55e000/0x0/0x1bfc00000, data 0x102dcd4/0x110e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:11.483741+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 26468352 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 160 heartbeat osd_stat(store_statfs(0x1ba55e000/0x0/0x1bfc00000, data 0x102dcd4/0x110e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:12.483894+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 26468352 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:13.483972+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 26460160 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137277 data_alloc: 218103808 data_used: 4943872
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:14.484114+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 26460160 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 161 ms_handle_reset con 0x55869683ec00 session 0x558695928000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:15.484242+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 26460160 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:16.484401+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 26460160 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 162 heartbeat osd_stat(store_statfs(0x1ba556000/0x0/0x1bfc00000, data 0x10314bb/0x1116000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:17.484596+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 26443776 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:18.485836+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 26443776 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144353 data_alloc: 218103808 data_used: 4952064
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:19.485999+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 26443776 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 162 ms_handle_reset con 0x55869683f000 session 0x5586976e4b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.367312431s of 12.806289673s, submitted: 64
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 162 ms_handle_reset con 0x558695813c00 session 0x558696d394a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:20.486107+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 26427392 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 162 heartbeat osd_stat(store_statfs(0x1ba559000/0x0/0x1bfc00000, data 0x10314ab/0x1115000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:21.486260+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 26427392 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 163 ms_handle_reset con 0x55869683e000 session 0x558697582780
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:22.486436+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 26353664 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 163 ms_handle_reset con 0x55869683e400 session 0x5586974f1680
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:23.486558+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 26345472 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 163 heartbeat osd_stat(store_statfs(0x1ba558000/0x0/0x1bfc00000, data 0x1033125/0x1116000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142629 data_alloc: 218103808 data_used: 4964352
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:24.486691+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 26345472 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 163 ms_handle_reset con 0x55869683ec00 session 0x55869779fe00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 163 ms_handle_reset con 0x55869683f000 session 0x55869779ef00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:25.486830+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 26320896 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:26.486976+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 26320896 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:27.487186+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 26320896 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:28.487338+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 26320896 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100739 data_alloc: 218103808 data_used: 4964352
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:29.487487+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 163 heartbeat osd_stat(store_statfs(0x1bab3a000/0x0/0x1bfc00000, data 0xa530b3/0xb34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 26320896 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:30.487657+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 26320896 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.496312141s of 10.949060440s, submitted: 77
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:31.487783+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:32.487951+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:33.488064+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696da1800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x558696da1800 session 0x5586953cc1e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104913 data_alloc: 218103808 data_used: 4972544
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:34.488216+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab36000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:35.488338+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:36.488713+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:37.488873+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:38.489010+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104913 data_alloc: 218103808 data_used: 4972544
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:39.489130+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab36000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab36000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:40.489255+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:41.489384+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:42.489520+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab36000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:43.489693+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104913 data_alloc: 218103808 data_used: 4972544
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:44.489837+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:45.490028+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:46.490174+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:47.490742+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab36000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:48.490937+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.213720322s of 18.229408264s, submitted: 16
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:49.491185+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e000 session 0x55869774f860
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:50.491353+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:51.491986+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:52.492135+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:53.492324+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:54.492468+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:55.492638+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 26312704 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:56.492763+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:57.492946+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:58.493083+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:06:59.493298+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:00.493611+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:01.493764+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:02.493906+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:03.494171+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:04.494423+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:05.494578+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:06.494827+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:07.495114+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:08.495368+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:09.495505+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:10.495655+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:11.495807+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:12.495952+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:13.496085+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:14.496216+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:15.496467+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:16.496651+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:17.496867+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:18.497037+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:19.497776+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:20.497942+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:21.498058+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:22.498179+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:23.498317+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:24.498546+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:25.498790+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:26.498941+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:27.499110+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:28.499237+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:29.499369+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:30.499488+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:31.499631+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:32.499795+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:33.499965+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:34.500133+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:35.500284+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:36.500444+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:37.500605+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:38.500770+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:39.500942+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:40.501077+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:41.501232+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:42.501394+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:43.501555+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:44.501696+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:45.501847+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:46.502002+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:47.502207+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:48.502326+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:49.502498+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104513 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e400 session 0x558697748d20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:50.502692+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683ec00 session 0x558696d39a40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:51.503058+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:52.503194+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 63.608505249s of 63.629333496s, submitted: 5
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683f000 session 0x5586977c9c20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x558697984000 session 0x558693e2da40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:53.503504+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:54.503764+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107378 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:55.503980+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 26304512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e000 session 0x558696d38d20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:56.504094+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 93208576 unmapped: 26238976 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e400 session 0x558696b3d860
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:57.504257+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1bab37000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1,0,0,1])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94363648 unmapped: 25083904 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683ec00 session 0x558696d421e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683f000 session 0x558694c3cb40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558697984400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x558697984400 session 0x558697748000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:58.504403+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 25075712 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:07:59.505226+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240870 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94371840 unmapped: 25075712 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:00.505354+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94380032 unmapped: 25067520 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:01.505484+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 25059328 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:02.505634+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 25059328 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:03.505776+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 25059328 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:04.505968+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240870 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 25051136 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:05.506116+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 25051136 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:06.506303+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 25051136 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:07.506739+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 25051136 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:08.506870+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 25051136 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:09.507028+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240870 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 25051136 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:10.507188+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 25051136 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:11.507376+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 25051136 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:12.507542+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 25051136 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:13.507757+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 25051136 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:14.508001+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240870 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 25051136 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:15.508239+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 25042944 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:16.508425+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 25042944 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:17.508689+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 25042944 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:18.508844+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 25042944 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:19.509064+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240870 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 25042944 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:20.509237+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 25042944 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:21.509456+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 25042944 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:22.509607+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 25042944 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:23.509778+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 25042944 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:24.509936+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240870 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 25034752 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:25.510145+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 25034752 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:26.510343+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 25034752 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:27.510638+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 25034752 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:28.510824+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 25034752 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:29.510980+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240870 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 25034752 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:30.511134+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e000 session 0x55869779f860
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 25018368 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e400 session 0x5586977a1a40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:31.511335+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 25018368 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:32.511597+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94437376 unmapped: 25010176 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:33.511875+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.407176971s of 40.790180206s, submitted: 129
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683ec00 session 0x5586967ea3c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683f000 session 0x558695842d20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:34.512088+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239971 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:35.512381+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6d000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6d000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:36.512585+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:37.512822+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586964f3c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:38.512953+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x5586964f3c00 session 0x55869773cb40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 25042944 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:39.513108+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278878 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e000 session 0x5586975ad2c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 24985600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:40.513266+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b95a1000/0x0/0x1bfc00000, data 0x1fe9c02/0x20cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:41.513491+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:42.513657+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:43.513816+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b95a1000/0x0/0x1bfc00000, data 0x1fe9c02/0x20cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:44.513950+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278878 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:45.514121+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:46.514277+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:47.514490+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:48.514663+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 24969216 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:49.514816+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278878 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 24969216 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b95a1000/0x0/0x1bfc00000, data 0x1fe9c02/0x20cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:50.515006+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 24969216 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.400892258s of 17.675216675s, submitted: 90
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:51.515263+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e400 session 0x55869752c3c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 25157632 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:52.515467+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 25157632 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:53.515672+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 25157632 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1b9a6e000/0x0/0x1bfc00000, data 0x1b1cc02/0x1c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:54.516365+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683ec00 session 0x558696d38960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242030 data_alloc: 218103808 data_used: 4988928
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94314496 unmapped: 25133056 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:55.516542+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683f000 session 0x5586967521e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:56.516689+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:57.516873+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:58.517073+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:08:59.517225+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119142 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:00.517441+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 33K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2788 syncs, 3.61 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2012 writes, 4405 keys, 2012 commit groups, 1.0 writes per commit group, ingest: 1.96 MB, 0.00 MB/s
                                           Interval WAL: 2012 writes, 898 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:01.517622+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 25001984 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:02.517822+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 25001984 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:03.517959+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f35c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x558698f35c00 session 0x558695935a40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 25001984 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e000 session 0x558697534b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:04.518125+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119142 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 24993792 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:05.518314+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.123113632s of 14.682233810s, submitted: 210
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e400 session 0x5586977c8960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 24993792 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:06.518459+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683ec00 session 0x5586953cde00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 24993792 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:07.518622+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683f000 session 0x5586977194a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 24985600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:08.518747+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f35800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x558698f35800 session 0x558696d38b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 24985600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e000 session 0x5586974b9860
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e400 session 0x5586977c94a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:09.518941+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683ec00 session 0x5586968012c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683f000 session 0x558696d42b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174683 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94134272 unmapped: 25313280 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:10.519076+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba03e000/0x0/0x1bfc00000, data 0x113cc54/0x1220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 25280512 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:11.519252+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba03e000/0x0/0x1bfc00000, data 0x113cc54/0x1220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94199808 unmapped: 25247744 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:12.519968+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f35400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94199808 unmapped: 25247744 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:13.520108+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x558698f35400 session 0x5586977641e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 25255936 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:14.520247+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba03e000/0x0/0x1bfc00000, data 0x113cc54/0x1220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173867 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 25255936 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:15.520387+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e000 session 0x558697718000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 25255936 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba03e000/0x0/0x1bfc00000, data 0x113cc54/0x1220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.634387970s of 10.466399193s, submitted: 72
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:16.520547+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:17.520733+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683ec00 session 0x5586977021e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:18.520955+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e400 session 0x55869773de00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:19.521116+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125055 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:20.521257+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:21.521407+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:22.521545+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:23.521721+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:24.521963+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125055 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:25.522107+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:26.522232+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:27.522400+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:28.522536+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:29.522763+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125055 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:30.522916+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:31.523089+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:32.523250+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:33.523432+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:34.523591+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125055 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:35.523739+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:36.523944+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:37.524174+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:38.524367+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:39.524577+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125055 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:40.524788+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 25231360 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:41.524972+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 25223168 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:42.525132+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 25223168 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:43.525273+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 25223168 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:44.525444+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125055 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 25223168 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:45.525641+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 25223168 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:46.525791+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 25223168 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:47.525979+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 25223168 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:48.526134+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 25223168 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:49.526261+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125055 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 25223168 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:50.526411+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 25223168 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:51.526568+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 25214976 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:52.526766+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94240768 unmapped: 25206784 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:53.526980+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:54.527134+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125055 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:55.527328+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:56.527474+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:57.527728+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:58.527898+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:09:59.528099+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125055 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:00.528309+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:01.528448+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:02.528599+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:03.528735+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:04.528970+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125055 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 25198592 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:05.529196+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:06.529311+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:07.529512+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:08.529716+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:09.529942+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125055 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:10.530140+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:11.530380+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:12.530560+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:13.530699+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:14.530843+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125055 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 25190400 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:15.530991+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:16.531111+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:17.531345+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:18.531467+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:19.531605+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 61.725551605s of 63.539310455s, submitted: 22
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126707 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:20.531742+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:21.531938+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683f000 session 0x558697765860
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:22.532118+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:23.532270+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:24.532430+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126707 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:25.532637+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54c02/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:26.532862+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:27.533113+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54c02/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 25182208 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:28.533273+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54c02/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94273536 unmapped: 25174016 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:29.533453+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.183951378s of 10.248233795s, submitted: 20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126603 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 25403392 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:30.533584+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54c02/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94240768 unmapped: 25206784 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:31.533692+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 25018368 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:32.533826+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 25018368 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:33.533996+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 25018368 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:34.534156+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54c02/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126531 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 25018368 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:35.534280+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f35000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x558698f35000 session 0x5586977354a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e000 session 0x5586974f0d20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 24993792 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:36.534410+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba726000/0x0/0x1bfc00000, data 0xa54c02/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 24993792 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:37.534534+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 24985600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:38.534682+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 24985600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:39.534834+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 24985600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:40.534940+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 24985600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:41.535074+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 24985600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:42.535230+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 24985600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:43.535410+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 24985600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:44.535582+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 24985600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:45.535699+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:46.535865+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:47.536084+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:48.549677+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:49.549853+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:50.549979+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:51.550108+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:52.550279+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 24977408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:53.550399+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 24969216 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:54.550550+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 24969216 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:55.550685+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 24969216 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:56.550835+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 24969216 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:57.551048+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 24961024 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:58.551209+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 24961024 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:10:59.551369+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 24952832 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:00.551525+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 24952832 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:01.551659+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 24952832 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:02.551981+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 24952832 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:03.552125+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 24952832 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:04.552333+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 24952832 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:05.552513+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 24952832 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:06.552776+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 24952832 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:07.553029+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 24952832 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:08.553255+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:09.553451+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:10.553598+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:11.553815+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:12.553981+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:13.554119+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:14.554285+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:15.554426+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:16.554572+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:17.554823+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:18.554950+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:19.555091+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:20.555252+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:21.555491+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:22.555645+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:23.555800+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:24.555968+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:25.556100+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:26.556227+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:27.556494+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:28.556665+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:29.556810+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:30.556980+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:31.557194+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:32.557455+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:33.557666+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94502912 unmapped: 24944640 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:34.557862+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 24936448 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:35.558220+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 24936448 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:36.558675+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 24936448 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:37.558953+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 24936448 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:38.559097+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 24936448 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:39.559603+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 24936448 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:40.559848+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 24936448 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:41.560166+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 24936448 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:42.560454+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 24936448 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:43.561427+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 24928256 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:44.561750+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 24928256 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:45.562366+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 24928256 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:46.562515+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:47.562950+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:48.563077+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:49.563200+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:50.563467+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:51.563787+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:52.564042+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:53.564304+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:54.564452+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:55.564742+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:56.564987+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:57.565214+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:58.565403+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:11:59.565594+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 24920064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:00.565744+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 24911872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:01.565945+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 24911872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:02.566102+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 24911872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:03.566363+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 24911872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:04.566581+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 24911872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:05.566742+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 24911872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:06.566905+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 24911872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:07.567133+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 24911872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:08.567325+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 24911872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:10.813335+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 24911872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:11.813458+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 24911872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:12.813588+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 24911872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:13.813707+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 24903680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:14.814023+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 24903680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:15.814153+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 104.697700500s of 105.603942871s, submitted: 246
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 24903680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:16.814318+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 24903680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e400 session 0x5586977343c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:17.814531+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 24903680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:18.814651+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 24903680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:19.814796+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 24903680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:20.814939+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 24903680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:21.815081+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 24903680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:22.815214+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 24903680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:23.815410+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 24903680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:24.815649+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 24903680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:25.815776+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:26.815921+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.935767174s of 10.939405441s, submitted: 1
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683ec00 session 0x558696ae8d20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:27.816074+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:28.816200+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:29.816434+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:30.816637+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:31.817077+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:32.817214+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:33.817406+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:34.817526+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683f000 session 0x5586976e54a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:35.817658+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125811 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x558698f34c00 session 0x558695942b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:36.817789+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:37.817931+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94552064 unmapped: 24895488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.899839401s of 11.917885780s, submitted: 5
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e000 session 0x558697764b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:38.818115+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 24887296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:39.818359+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 24887296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba727000/0x0/0x1bfc00000, data 0xa54bf2/0xb37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:40.818490+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127285 data_alloc: 218103808 data_used: 4984832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 24887296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683e400 session 0x55869752c5a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:41.818732+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683ec00 session 0x558696b134a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 95322112 unmapped: 24125440 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x55869683f000 session 0x5586967534a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 ms_handle_reset con 0x558698f34800 session 0x558694aeeb40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:42.818936+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 24084480 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 heartbeat osd_stat(store_statfs(0x1ba4b4000/0x0/0x1bfc00000, data 0xcc7bf2/0xdaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:43.819104+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 24076288 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 165 ms_handle_reset con 0x55869683e000 session 0x5586967e4f00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:44.819231+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 24051712 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:45.819396+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158663 data_alloc: 218103808 data_used: 4993024
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 24051712 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:46.819562+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 24051712 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:47.819778+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 24051712 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 165 heartbeat osd_stat(store_statfs(0x1ba4b0000/0x0/0x1bfc00000, data 0xcc984b/0xdad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:48.819926+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 24051712 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:49.820047+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 24051712 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:50.820174+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158663 data_alloc: 218103808 data_used: 4993024
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 24051712 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:51.820297+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 24043520 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.086971283s of 13.694534302s, submitted: 52
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 165 heartbeat osd_stat(store_statfs(0x1ba4b0000/0x0/0x1bfc00000, data 0xcc984b/0xdad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:52.820448+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 22962176 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 165 ms_handle_reset con 0x55869683ec00 session 0x558695934d20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:53.820817+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 22962176 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 165 handle_osd_map epochs [166,166], i have 166, src has [1,166]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x55869683f000 session 0x558694aee3c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x55869683e400 session 0x5586977025a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:54.821010+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x558698f34400 session 0x558694610780
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 22953984 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x55869683e000 session 0x558695942960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x55869683e400 session 0x5586975a52c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x55869683ec00 session 0x5586977492c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:55.821166+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145255 data_alloc: 218103808 data_used: 5001216
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 96501760 unmapped: 22945792 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x55869683f000 session 0x5586976e4960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696953000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x558696953000 session 0x558695942d20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x55869683e000 session 0x55869773c1e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:56.821288+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 heartbeat osd_stat(store_statfs(0x1ba71c000/0x0/0x1bfc00000, data 0xa586ee/0xb41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x55869683e400 session 0x5586956ec1e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 96509952 unmapped: 22937600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:57.821491+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 96509952 unmapped: 22937600 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:58.821589+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 96518144 unmapped: 22929408 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x55869683ec00 session 0x55869774e780
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x55869683f000 session 0x558694b05c20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696953000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:12:59.821696+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x558696953000 session 0x55869773c960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 22921216 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 ms_handle_reset con 0x55869683e000 session 0x558697534960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 heartbeat osd_stat(store_statfs(0x1ba71b000/0x0/0x1bfc00000, data 0xa586fe/0xb42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:00.821920+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150743 data_alloc: 218103808 data_used: 5005312
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 22921216 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683e400 session 0x558697702960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683ec00 session 0x5586948fa960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683f000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f35800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f35800 session 0x5586957f8b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:01.822063+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683f000 session 0x558697583a40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 96542720 unmapped: 22904832 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 heartbeat osd_stat(store_statfs(0x1ba716000/0x0/0x1bfc00000, data 0xa5a25d/0xb47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:02.822194+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 96542720 unmapped: 22904832 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.297000885s of 10.873082161s, submitted: 106
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683e000 session 0x5586948fa3c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683e400 session 0x5586977a1680
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683ec00 session 0x558697734f00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:03.822331+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f35800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f35800 session 0x558697734d20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 22880256 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34800 session 0x5586975350e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34c00 session 0x5586975c2f00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:04.822455+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34800 session 0x558696b4dc20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683e000 session 0x5586946245a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97255424 unmapped: 22192128 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683ec00 session 0x5586967ea000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683e400 session 0x558696b4d4a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683e000 session 0x558697720000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:05.822607+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683ec00 session 0x558694b594a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298939 data_alloc: 218103808 data_used: 5013504
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 22011904 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:06.822760+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34800 session 0x5586975ac1e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97304576 unmapped: 22142976 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34c00 session 0x558697583e00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f35800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 heartbeat osd_stat(store_statfs(0x1b95c8000/0x0/0x1bfc00000, data 0x1ba7321/0x1c96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,1])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f35800 session 0x558694b05860
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:07.822935+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97402880 unmapped: 22044672 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683e000 session 0x558696d43e00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683ec00 session 0x558697702b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:08.823183+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34800 session 0x558696d43a40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 21807104 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:09.823344+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 21807104 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:10.823527+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416974 data_alloc: 218103808 data_used: 5013504
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 21807104 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:11.823680+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 21798912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 heartbeat osd_stat(store_statfs(0x1b86dc000/0x0/0x1bfc00000, data 0x2a942bf/0x2b82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:12.826528+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 21798912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34c00 session 0x5586953cc000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f35800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.730232239s of 10.475429535s, submitted: 140
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:13.828505+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f35800 session 0x558696b12b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 22347776 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 heartbeat osd_stat(store_statfs(0x1b86dc000/0x0/0x1bfc00000, data 0x2a942af/0x2b81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:14.829295+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 heartbeat osd_stat(store_statfs(0x1b8e0e000/0x0/0x1bfc00000, data 0x23632af/0x2450000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 22339584 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 heartbeat osd_stat(store_statfs(0x1b8e0e000/0x0/0x1bfc00000, data 0x23632af/0x2450000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:15.829491+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361668 data_alloc: 218103808 data_used: 5013504
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 22339584 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:16.830974+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 22339584 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:17.831220+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 22339584 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:18.831931+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 22339584 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683e000 session 0x558696d385a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:19.832158+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 heartbeat osd_stat(store_statfs(0x1b9578000/0x0/0x1bfc00000, data 0x1bf92af/0x1ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 22355968 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:20.832380+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307722 data_alloc: 218103808 data_used: 5013504
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 22347776 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683ec00 session 0x558696b12960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:21.833224+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34800 session 0x5586975adc20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97083392 unmapped: 22364160 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34c00 session 0x5586967e7a40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558695813c00 session 0x55869752d2c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:22.833459+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34400 session 0x5586975825a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 22347776 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558695813c00 session 0x5586975c2b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:23.833652+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683e000 session 0x558694c39e00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 22347776 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.160103798s of 10.644664764s, submitted: 65
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683ec00 session 0x558694610f00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 heartbeat osd_stat(store_statfs(0x1ba718000/0x0/0x1bfc00000, data 0xa5a23d/0xb45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:24.833827+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34800 session 0x558696752b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97140736 unmapped: 22306816 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558695813c00 session 0x5586974f1c20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:25.834017+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683e000 session 0x5586974b8000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182399 data_alloc: 218103808 data_used: 5013504
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 22290432 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:26.834160+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683ec00 session 0x558696801a40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34400 session 0x558696801680
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97165312 unmapped: 22282240 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:27.834359+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34c00 session 0x558696851c20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 97140736 unmapped: 22306816 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558695813c00 session 0x558694b05680
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:28.834473+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683e000 session 0x5586949014a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683ec00 session 0x5586969872c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98066432 unmapped: 21381120 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 heartbeat osd_stat(store_statfs(0x1ba0e5000/0x0/0x1bfc00000, data 0x108f22d/0x1179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:29.834616+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558698f34400 session 0x5586977201e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 21397504 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:30.834754+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233018 data_alloc: 218103808 data_used: 5013504
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 21397504 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696d5b800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:31.834908+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558696d5b800 session 0x5586974f1a40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 21389312 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:32.835056+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x558695813c00 session 0x5586974b85a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 21389312 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 ms_handle_reset con 0x55869683e000 session 0x5586975a4000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:33.835177+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 21364736 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 heartbeat osd_stat(store_statfs(0x1ba0e6000/0x0/0x1bfc00000, data 0x108f1cb/0x1178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:34.835313+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.918402672s of 10.674113274s, submitted: 121
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 21364736 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:35.835456+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 168 ms_handle_reset con 0x55869683ec00 session 0x5586975acd20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233582 data_alloc: 218103808 data_used: 5017600
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98107392 unmapped: 21340160 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:36.835650+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98107392 unmapped: 21340160 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698f34400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 168 ms_handle_reset con 0x558698f34400 session 0x5586976e5a40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696d5a400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 168 heartbeat osd_stat(store_statfs(0x1ba0e3000/0x0/0x1bfc00000, data 0x1090e45/0x1179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 168 ms_handle_reset con 0x558696d5a400 session 0x5586975830e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:37.835939+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98140160 unmapped: 21307392 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:38.836703+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98140160 unmapped: 21307392 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:39.836857+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98140160 unmapped: 21307392 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:40.837036+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183108 data_alloc: 218103808 data_used: 5017600
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98140160 unmapped: 21307392 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 168 heartbeat osd_stat(store_statfs(0x1ba71b000/0x0/0x1bfc00000, data 0xa5be36/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:41.837157+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98140160 unmapped: 21307392 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:42.837291+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98140160 unmapped: 21307392 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:43.837418+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98140160 unmapped: 21307392 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:44.837557+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 168 heartbeat osd_stat(store_statfs(0x1ba71b000/0x0/0x1bfc00000, data 0xa5be36/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98140160 unmapped: 21307392 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:45.837705+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183588 data_alloc: 218103808 data_used: 5029888
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98140160 unmapped: 21307392 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.953874588s of 11.257008553s, submitted: 60
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba71b000/0x0/0x1bfc00000, data 0xa5be36/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:46.837836+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 21299200 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba717000/0x0/0x1bfc00000, data 0xa5d975/0xb46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:47.838037+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 21299200 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba717000/0x0/0x1bfc00000, data 0xa5d975/0xb46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:48.838177+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba717000/0x0/0x1bfc00000, data 0xa5d975/0xb46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 21299200 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:49.838287+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 21299200 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:50.838456+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187762 data_alloc: 218103808 data_used: 5038080
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 21299200 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba717000/0x0/0x1bfc00000, data 0xa5d975/0xb46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:51.838598+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 21299200 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:52.838768+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 21299200 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:53.838968+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 21299200 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:54.839109+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 ms_handle_reset con 0x558695813c00 session 0x55869774f680
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 21299200 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:55.839262+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba717000/0x0/0x1bfc00000, data 0xa5d9d7/0xb47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188664 data_alloc: 218103808 data_used: 5038080
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 21299200 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:56.839469+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba717000/0x0/0x1bfc00000, data 0xa5d9d7/0xb47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98156544 unmapped: 21291008 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:57.839617+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98156544 unmapped: 21291008 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:58.839747+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98164736 unmapped: 21282816 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:13:59.839895+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.732316971s of 13.751753807s, submitted: 16
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 ms_handle_reset con 0x55869683e000 session 0x5586975a5c20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98181120 unmapped: 21266432 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets getting new tickets!
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:00.840165+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _finish_auth 0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:00.841110+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187791 data_alloc: 218103808 data_used: 5038080
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98189312 unmapped: 21258240 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:01.840308+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98189312 unmapped: 21258240 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba718000/0x0/0x1bfc00000, data 0xa5d975/0xb46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:02.840431+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98189312 unmapped: 21258240 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:03.840564+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98189312 unmapped: 21258240 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:04.840738+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98189312 unmapped: 21258240 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: mgrc ms_handle_reset ms_handle_reset con 0x5586977b5800
Jan 27 09:22:32 compute-0 ceph-osd[84951]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/510010839
Jan 27 09:22:32 compute-0 ceph-osd[84951]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/510010839,v1:192.168.122.100:6801/510010839]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: get_auth_request con 0x558696d5b800 auth_method 0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: mgrc handle_mgr_configure stats_period=5
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:05.840929+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba718000/0x0/0x1bfc00000, data 0xa5d975/0xb46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 ms_handle_reset con 0x5586956f2c00 session 0x5586975c21e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683ec00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 ms_handle_reset con 0x5586956f3c00 session 0x558694c39860
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558696d5a400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 ms_handle_reset con 0x55869582b000 session 0x5586948fbc20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187791 data_alloc: 218103808 data_used: 5038080
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 21159936 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:06.841043+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 21159936 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:07.841221+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 21159936 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869582b000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 ms_handle_reset con 0x55869582b000 session 0x558697718780
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869537dc00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:08.841349+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 ms_handle_reset con 0x55869537dc00 session 0x5586959425a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98304000 unmapped: 21143552 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:09.859139+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98304000 unmapped: 21143552 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:10.859671+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698415c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.771538734s of 10.803625107s, submitted: 12
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 ms_handle_reset con 0x558698415c00 session 0x558694aefe00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191047 data_alloc: 218103808 data_used: 5038080
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98328576 unmapped: 21118976 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:11.859849+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba717000/0x0/0x1bfc00000, data 0xa5d9d7/0xb47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98328576 unmapped: 21118976 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:12.859986+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98328576 unmapped: 21118976 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869537dc00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba717000/0x0/0x1bfc00000, data 0xa5d9d7/0xb47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 ms_handle_reset con 0x55869537dc00 session 0x558697582b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:13.860129+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 ms_handle_reset con 0x558695813c00 session 0x558696b4d680
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869582b000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 ms_handle_reset con 0x55869582b000 session 0x558696938f00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 20570112 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:14.860302+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 20570112 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:15.860473+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba19c000/0x0/0x1bfc00000, data 0xfd9975/0x10c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241047 data_alloc: 218103808 data_used: 5038080
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 20570112 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:16.860656+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 20570112 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba19c000/0x0/0x1bfc00000, data 0xfd9975/0x10c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:17.860852+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba19c000/0x0/0x1bfc00000, data 0xfd9975/0x10c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 20570112 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:18.860987+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 heartbeat osd_stat(store_statfs(0x1ba19c000/0x0/0x1bfc00000, data 0xfd9975/0x10c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98918400 unmapped: 20529152 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:19.861147+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 171 ms_handle_reset con 0x55869683e000 session 0x55869752da40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 20496384 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:20.861273+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.164933205s of 10.434926033s, submitted: 64
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256297 data_alloc: 218103808 data_used: 5046272
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 20471808 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698414000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698414400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 172 ms_handle_reset con 0x558698414400 session 0x5586975a4b40
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:21.861438+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 172 heartbeat osd_stat(store_statfs(0x1ba02f000/0x0/0x1bfc00000, data 0x1140f65/0x122f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 18628608 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 173 ms_handle_reset con 0x558698414000 session 0x5586956ecd20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:22.861596+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 18628608 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698414400
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 173 ms_handle_reset con 0x558698414400 session 0x5586975a50e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 173 heartbeat osd_stat(store_statfs(0x1ba02a000/0x0/0x1bfc00000, data 0x1142c3c/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:23.861839+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 18587648 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:24.861988+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869537dc00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 173 ms_handle_reset con 0x558695813c00 session 0x5586956ed860
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869582b000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 102277120 unmapped: 17170432 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 173 ms_handle_reset con 0x55869582b000 session 0x5586953ccf00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:25.862194+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 174 ms_handle_reset con 0x55869683e000 session 0x5586946243c0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558695813c00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 174 ms_handle_reset con 0x558695813c00 session 0x558696939c20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304385 data_alloc: 218103808 data_used: 5062656
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 17629184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 174 ms_handle_reset con 0x55869537dc00 session 0x55869773dc20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:26.862347+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101171200 unmapped: 18276352 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:27.862546+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101171200 unmapped: 18276352 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 174 heartbeat osd_stat(store_statfs(0x1b9e9b000/0x0/0x1bfc00000, data 0x12cf92f/0x13c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:28.862682+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101171200 unmapped: 18276352 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:29.862849+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 174 heartbeat osd_stat(store_statfs(0x1b9e9b000/0x0/0x1bfc00000, data 0x12cf92f/0x13c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101171200 unmapped: 18276352 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:30.862999+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304253 data_alloc: 218103808 data_used: 5062656
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101171200 unmapped: 18276352 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:31.863176+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 174 heartbeat osd_stat(store_statfs(0x1b9e9b000/0x0/0x1bfc00000, data 0x12cf92f/0x13c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101171200 unmapped: 18276352 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:32.863329+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 18268160 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:33.863479+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 18268160 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:34.863623+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 18268160 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:35.863807+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304253 data_alloc: 218103808 data_used: 5062656
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 18268160 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:36.863951+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 18268160 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 174 heartbeat osd_stat(store_statfs(0x1b9e9b000/0x0/0x1bfc00000, data 0x12cf92f/0x13c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:37.864123+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869582b000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.549819946s of 16.849325180s, submitted: 116
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 174 handle_osd_map epochs [174,175], i have 174, src has [1,175]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:38.864309+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 175 ms_handle_reset con 0x55869582b000 session 0x5586948fa960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698414000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 175 ms_handle_reset con 0x558698414000 session 0x5586975a45a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 175 ms_handle_reset con 0x55869683e000 session 0x55869696d680
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 18620416 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _renew_subs
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:39.864752+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 176 ms_handle_reset con 0x55869683e000 session 0x5586975a41e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:40.864909+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230985 data_alloc: 218103808 data_used: 5070848
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:41.865096+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 176 heartbeat osd_stat(store_statfs(0x1ba2f1000/0x0/0x1bfc00000, data 0xa6a325/0xb5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:42.865377+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:43.865531+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 176 heartbeat osd_stat(store_statfs(0x1ba2f1000/0x0/0x1bfc00000, data 0xa6a325/0xb5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 176 heartbeat osd_stat(store_statfs(0x1ba2f1000/0x0/0x1bfc00000, data 0xa6a325/0xb5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:44.865725+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:45.865869+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:46.866030+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:47.866189+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:48.866368+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:49.866504+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:50.866650+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:51.867325+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:52.867492+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:53.867622+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:54.867735+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:55.867929+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:56.868092+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:57.868238+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:58.868368+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:14:59.868523+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:00.868681+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:01.868824+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:02.868946+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:03.869079+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:04.869180+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:05.869331+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:06.869480+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:07.869647+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:08.869780+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:09.869975+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:10.870088+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:11.870258+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:12.870386+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:13.884111+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:14.884241+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:15.884375+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:16.884535+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:17.884808+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:18.885081+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:19.885255+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100745216 unmapped: 18702336 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:20.885411+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:21.885612+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:22.885771+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:23.885934+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:24.886056+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:25.886175+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:26.886421+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:27.886615+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:28.886733+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:29.886869+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:30.887020+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:31.887157+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:32.887270+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:33.887439+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:34.887568+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:35.887716+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:36.887839+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:37.887982+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:38.888100+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:39.888256+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:40.888436+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:41.888585+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:42.888706+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100769792 unmapped: 18677760 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:43.888985+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:44.889140+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:45.889273+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:46.889459+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:47.889622+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:48.889752+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:49.889875+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:50.890035+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:51.890157+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:52.890278+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:53.890427+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:54.891206+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:55.891748+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:56.892158+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:57.892503+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:58.892690+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:15:59.892846+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:00.893011+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:01.893472+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:02.893708+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:03.894074+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:04.894376+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:05.894632+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234279 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:06.894869+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:07.895149+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869582b000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 ms_handle_reset con 0x55869582b000 session 0x55869752cd20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698414000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:08.895289+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 ms_handle_reset con 0x558698414000 session 0x5586977494a0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:09.895515+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698e70000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 91.670463562s of 92.111465454s, submitted: 147
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 ms_handle_reset con 0x558698e70000 session 0x55869779ed20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:10.895701+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6beac/0xb60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236701 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:11.895904+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586977b2000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:12.896056+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 ms_handle_reset con 0x5586977b2000 session 0x558697735c20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869582b000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:13.896243+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 18767872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ef000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [0,0,0,0,0,1])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 ms_handle_reset con 0x55869582b000 session 0x558694900d20
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x55869683e000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:14.896540+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 14786560 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 ms_handle_reset con 0x55869683e000 session 0x5586948fbe00
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x5586977b2000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 ms_handle_reset con 0x5586977b2000 session 0x5586977a0960
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2a5000/0x0/0x1bfc00000, data 0xab3ed5/0xba9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:15.896827+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 18538496 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270079 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:16.897071+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 18538496 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:17.897307+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 18538496 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1b9f0b000/0x0/0x1bfc00000, data 0xe4df0e/0xf43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:18.897499+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100917248 unmapped: 18530304 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:19.897721+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100917248 unmapped: 18530304 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1b9f0b000/0x0/0x1bfc00000, data 0xe4df0e/0xf43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:20.898040+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100917248 unmapped: 18530304 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270079 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:21.898226+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100917248 unmapped: 18530304 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698414000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 ms_handle_reset con 0x558698414000 session 0x5586977a10e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: handle_auth_request added challenge on 0x558698e70000
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.389634132s of 12.289875031s, submitted: 53
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:22.898414+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100671488 unmapped: 18776064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1b9f0c000/0x0/0x1bfc00000, data 0xa6befe/0xb60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,3])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:23.898546+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100671488 unmapped: 18776064 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 ms_handle_reset con 0x558698e70000 session 0x5586969870e0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:24.898695+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 18767872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:25.898990+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 18767872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:26.899144+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 18767872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:27.899315+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 18767872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:28.899429+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 18767872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:29.899593+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 18767872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:30.899817+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 18767872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:31.900009+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 18767872 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:32.900183+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:33.900327+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:34.900507+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:35.900628+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:36.900784+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:37.900945+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:38.901077+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:39.901196+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:40.901341+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:41.901494+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:42.901690+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:43.901819+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:44.901991+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:45.902138+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:46.902293+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:47.902476+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:48.902633+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:49.902794+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:50.903014+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:51.903179+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:52.903343+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:53.903523+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:54.903646+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:55.903761+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 18759680 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:56.903912+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:57.904150+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:58.904652+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:16:59.904950+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:00.905170+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:01.905444+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:02.905805+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:03.905935+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:04.906158+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:05.906382+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:06.906617+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:07.906776+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:08.906926+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 18751488 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:09.907149+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:11.163377+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:12.163556+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:13.163692+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:14.163926+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:15.164061+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:16.164226+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:17.164364+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:18.164580+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 18743296 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:19.164967+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 18735104 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:20.165123+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 18735104 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:21.165240+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 18735104 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:22.165356+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:23.165483+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 18735104 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:24.165704+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 18735104 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:25.165837+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 18735104 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:26.166041+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 18735104 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:27.166183+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 18735104 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:28.166359+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 18735104 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:29.166502+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:30.166683+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:31.166828+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:32.166967+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:33.167123+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:34.167239+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:35.167361+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:36.167493+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:37.167691+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:38.167845+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:39.167993+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:40.168127+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:41.168340+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:42.168490+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:43.168624+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100720640 unmapped: 18726912 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:44.168773+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:45.168921+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:46.169065+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:47.169206+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:48.169337+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:49.169472+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:50.169611+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:51.169742+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:52.169917+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:53.170089+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:54.170242+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 18718720 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:55.170391+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:56.170520+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:57.170651+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:58.170818+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:17:59.170956+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:00.171091+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:01.171374+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:02.171571+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:03.171718+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 18710528 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:04.171918+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100745216 unmapped: 18702336 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:05.172050+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100745216 unmapped: 18702336 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:06.172165+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:07.172316+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:08.172521+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:09.172650+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:10.172795+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:11.172931+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:12.173090+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:13.173241+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:14.173386+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:15.173514+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:16.173651+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 18694144 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:17.173783+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:18.173943+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:19.174064+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:20.174206+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:21.174347+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:22.174496+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:23.174641+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:24.175064+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:25.175339+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:26.175592+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:27.175771+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:28.176023+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:29.176216+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 18685952 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:30.176353+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100769792 unmapped: 18677760 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:31.176553+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:32.176797+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:33.177017+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:34.177221+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:35.177423+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:36.177628+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:37.177817+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:38.178036+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:39.178218+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:40.178383+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:41.178569+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:42.178716+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:43.178934+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:44.179105+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:45.179325+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:46.179568+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:47.179718+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:48.179958+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:49.180127+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:50.180345+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:51.180568+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:52.180718+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:53.180955+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:54.181119+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:55.181262+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:56.181476+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:57.181689+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:58.181951+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:18:59.182141+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:00.182310+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 40K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3958 syncs, 3.17 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2506 writes, 6110 keys, 2506 commit groups, 1.0 writes per commit group, ingest: 3.25 MB, 0.01 MB/s
                                           Interval WAL: 2506 writes, 1170 syncs, 2.14 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:01.182485+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:02.182699+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:03.182928+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:04.183145+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:05.183275+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 18669568 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:06.183496+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:07.183974+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:08.184450+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:09.184663+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:10.184940+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:11.185103+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:12.185387+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:13.185648+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:14.185855+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:15.186052+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:16.186225+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:17.186367+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:18.186565+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 18661376 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:19.186736+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:20.186877+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:21.187044+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:22.187191+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:23.187378+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:24.187528+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:25.187716+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:26.187867+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:27.188132+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:28.188293+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:29.188466+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:30.188624+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:31.188865+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:32.189180+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 18653184 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:33.189457+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:34.189756+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:35.190000+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:36.190223+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:37.190438+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:38.190705+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:39.190972+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:40.191449+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:41.192195+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:42.192346+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:43.192529+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:44.192763+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:45.193191+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 18644992 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:46.193345+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:47.193544+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:48.193744+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:49.193957+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:50.194111+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:51.194240+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:52.194367+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:53.194505+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:54.194712+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:55.194869+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:56.195038+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:57.195180+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 18636800 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:58.195372+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 18620416 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:19:59.195532+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 18620416 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:00.195667+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 18620416 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:01.195801+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 18620416 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:02.195989+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 18620416 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:03.197032+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 18620416 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:04.197249+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 18620416 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:05.197556+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 18620416 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:06.197701+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 18612224 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:07.197977+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 18612224 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:08.198200+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 18612224 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:09.320861+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 18612224 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:10.321053+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 18612224 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:11.321235+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 18612224 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:12.321376+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 18612224 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:13.321602+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 18612224 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:14.321879+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 18612224 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:15.322072+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 18612224 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:16.322218+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 18612224 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:17.322387+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 18612224 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:18.322593+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:19.322732+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:20.322862+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:21.323036+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:22.323202+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:23.323342+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:24.323469+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:25.323585+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ee000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 18604032 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:26.323865+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 18595840 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:27.324097+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239385 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 18595840 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:28.324302+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 18595840 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:29.324446+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 245.986053467s of 247.395355225s, submitted: 18
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 18595840 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:30.324575+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 18587648 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:31.324650+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ef000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 18563072 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:32.324790+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1ba2ef000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 18554880 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:33.324945+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 18538496 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:34.325081+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100917248 unmapped: 18530304 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:35.325325+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 18505728 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:36.325485+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 18505728 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:37.325654+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,1])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 18472960 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:38.325843+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 18472960 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:39.325978+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.453606606s of 10.000696182s, submitted: 153
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 18472960 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:40.326135+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 18464768 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:41.326417+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 18464768 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:42.326566+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 18448384 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:43.326704+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101015552 unmapped: 18432000 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:44.326871+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 18415616 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:45.327052+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 18415616 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:46.327193+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 18415616 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:47.327330+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 18415616 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:48.327558+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 18415616 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:49.327697+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 18415616 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:50.327847+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 18415616 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:51.327957+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 18415616 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:52.329101+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 18415616 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:53.329235+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:54.329376+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:55.329500+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:56.329670+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:57.329791+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:58.330058+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:20:59.330849+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:00.330965+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:01.331090+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:02.331212+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:03.331351+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:04.331472+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:05.331617+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:06.331752+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:07.331935+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:08.332083+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:09.332209+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:10.332326+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 18407424 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:11.332455+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:12.332782+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:13.333028+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:14.333173+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:15.333314+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:16.333459+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:17.333626+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:18.333800+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:19.333943+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:20.334077+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:21.334210+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:22.334370+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:23.334494+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:24.334658+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:25.334776+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:26.334912+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:27.335052+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:28.335217+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:29.335371+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:30.335516+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:31.335650+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:32.335798+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:33.335932+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:34.336163+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:35.336332+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:36.336489+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:37.336618+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:38.336801+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:39.336951+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:40.337108+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:41.337235+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:42.337626+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:43.337764+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:44.337982+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:45.338150+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 18399232 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:46.338313+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 18391040 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:47.338436+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 18391040 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:48.339238+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 18391040 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:49.339364+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 18391040 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:50.339515+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101064704 unmapped: 18382848 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:51.339697+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101064704 unmapped: 18382848 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:52.339838+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101064704 unmapped: 18382848 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:53.340023+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101064704 unmapped: 18382848 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:54.340154+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101064704 unmapped: 18382848 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:55.340458+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101064704 unmapped: 18382848 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:56.340598+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101064704 unmapped: 18382848 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:57.340777+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101064704 unmapped: 18382848 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:22:32 compute-0 ceph-osd[84951]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:22:32 compute-0 ceph-osd[84951]: bluestore.MempoolThread(0x558693131b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239209 data_alloc: 218103808 data_used: 5079040
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:58.340942+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101064704 unmapped: 18382848 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:21:59.341046+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101064704 unmapped: 18382848 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: do_command 'config diff' '{prefix=config diff}'
Jan 27 09:22:32 compute-0 ceph-osd[84951]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 09:22:32 compute-0 ceph-osd[84951]: do_command 'config show' '{prefix=config show}'
Jan 27 09:22:32 compute-0 ceph-osd[84951]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 09:22:32 compute-0 ceph-osd[84951]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 09:22:32 compute-0 ceph-osd[84951]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:22:00.341233+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 09:22:32 compute-0 ceph-osd[84951]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 18063360 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:22:01.341505+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 18243584 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: osd.0 177 heartbeat osd_stat(store_statfs(0x1bb30f000/0x0/0x1bfc00000, data 0xa6be9c/0xb5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: tick
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_tickets
Jan 27 09:22:32 compute-0 ceph-osd[84951]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T09:22:02.341620+0000)
Jan 27 09:22:32 compute-0 ceph-osd[84951]: prioritycache tune_memory target: 4294967296 mapped: 101089280 unmapped: 18358272 heap: 119447552 old mem: 2845415832 new mem: 2845415832
Jan 27 09:22:32 compute-0 ceph-osd[84951]: do_command 'log dump' '{prefix=log dump}'
Jan 27 09:22:32 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18000 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27790 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.27689 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.27698 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.27704 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.27742 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.17952 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3708348845' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3767177986' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2749011361' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.27757 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2780761255' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2424368200' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2520139319' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 27 09:22:32 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1555584008' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 27 09:22:33 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:33 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27752 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 27 09:22:33 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2853190007' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 27 09:22:33 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18012 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:33 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27802 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:33.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:33 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27758 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 27 09:22:33 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564312606' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 09:22:33 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:33 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:22:33 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:33.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:22:33 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27820 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:33 compute-0 nova_compute[247671]: 2026-01-27 09:22:33.732 247675 INFO nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Instance 621d3dcf-38f5-4e64-af83-bbe492683b16 has allocations against this compute host but is not found in the database.
Jan 27 09:22:33 compute-0 nova_compute[247671]: 2026-01-27 09:22:33.732 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:22:33 compute-0 nova_compute[247671]: 2026-01-27 09:22:33.732 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:22:33 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18033 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:33 compute-0 nova_compute[247671]: 2026-01-27 09:22:33.782 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:22:33 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27773 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:33 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 27 09:22:33 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1441268320' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27835 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18051 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27785 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 27 09:22:34 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3542761759' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 27 09:22:34 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2931256400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:22:34 compute-0 crontab[289282]: (root) LIST (root)
Jan 27 09:22:34 compute-0 nova_compute[247671]: 2026-01-27 09:22:34.452 247675 DEBUG oslo_concurrency.processutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:22:34 compute-0 nova_compute[247671]: 2026-01-27 09:22:34.459 247675 DEBUG nova.compute.provider_tree [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed in ProviderTree for provider: 083cbb1c-f2d4-4883-a91d-8697c4453517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:22:34 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27847 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:34 compute-0 nova_compute[247671]: 2026-01-27 09:22:34.544 247675 DEBUG nova.scheduler.client.report [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Inventory has not changed for provider 083cbb1c-f2d4-4883-a91d-8697c4453517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:22:34 compute-0 nova_compute[247671]: 2026-01-27 09:22:34.546 247675 DEBUG nova.compute.resource_tracker [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:22:34 compute-0 nova_compute[247671]: 2026-01-27 09:22:34.547 247675 DEBUG oslo_concurrency.lockutils [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:34 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18063 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.17973 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.27769 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.27740 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.18000 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.27790 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: pgmap v1797: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.27752 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3439514285' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2853190007' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3241736887' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/329833587' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/257954465' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2564312606' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1441268320' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27865 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:34 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 27 09:22:34 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4096891045' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 27 09:22:35 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:35 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18081 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:35 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27874 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:35 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27806 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:35 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T09:22:35.173+0000 7fe0fd675640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:22:35 compute-0 ceph-mgr[74650]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:22:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:22:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:35.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:22:35 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27889 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:35 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:35 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:35 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:35.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:35 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 27 09:22:35 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580127636' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.18012 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.27802 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.27758 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.27820 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.18033 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.27773 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1348432722' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.27835 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.18051 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.27785 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/200875247' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3542761759' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2931256400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.27847 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.18063 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.27865 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3953285462' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4096891045' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/880276851' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: pgmap v1798: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.18081 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.27874 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.27806 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1018624648' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4032263268' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/187552521' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18111 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T09:22:36.306+0000 7fe0fd675640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:22:36 compute-0 ceph-mgr[74650]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:22:36 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27919 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-281e9bde-2795-59f4-98ac-90cf5b49a2de-mgr-compute-0-vujqxq[74646]: 2026-01-27T09:22:36.312+0000 7fe0fd675640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:22:36 compute-0 ceph-mgr[74650]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:22:36 compute-0 sudo[289471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:36 compute-0 sudo[289471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:36 compute-0 sudo[289471]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:36 compute-0 sudo[289497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 27 09:22:36 compute-0 sudo[289497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 09:22:36 compute-0 sudo[289497]: pam_unix(sudo:session): session closed for user root
Jan 27 09:22:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 27 09:22:36 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/915383182' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 27 09:22:36 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 27 09:22:36 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1656215778' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 27 09:22:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3132966199' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 27 09:22:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/684583258' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.27889 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3656288821' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3580127636' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4040902856' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3886125292' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2221450783' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/426410401' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/915383182' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4008723180' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1937641089' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1416407879' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1656215778' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1833768163' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3132966199' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 27 09:22:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2208005674' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 27 09:22:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:37.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:37 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:37 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:37 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:37.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 27 09:22:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315785763' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 27 09:22:37 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 27 09:22:37 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3363429746' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 27 09:22:38 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2358917229' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 27 09:22:38 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4223038945' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 27 09:22:38 compute-0 systemd[1]: Starting Hostname Service...
Jan 27 09:22:38 compute-0 systemd[1]: Started Hostname Service.
Jan 27 09:22:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 27 09:22:38 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966075564' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 27 09:22:38 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2371639653' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.18111 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.27919 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: pgmap v1799: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2599507766' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/4028550549' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1245439390' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/684583258' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3405554498' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2208005674' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3315619037' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1566630956' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2315785763' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1153478924' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3363429746' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1244028130' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1942623641' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3705683954' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3920415025' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2358917229' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/4223038945' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/1889698516' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2892749358' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27917 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 27 09:22:38 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2696460285' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 27 09:22:38 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 27 09:22:38 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/202204909' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:39 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27932 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27926 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 27 09:22:39 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480329827' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 27 09:22:39 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1160291347' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:39.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:39 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:39 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:22:39 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:39.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3357341063' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3966075564' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2371639653' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1718812705' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.27917 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4189652473' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2696460285' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/202204909' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2578474620' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2145166791' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: pgmap v1800: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.27932 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.27926 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/480329827' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3912296272' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1160291347' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2197387027' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18240 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27962 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28018 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28027 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:39 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18249 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:40 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28036 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:40 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28042 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:40 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18258 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:40 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18270 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:40 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28048 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:40 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:40 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27983 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:40 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18285 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:40 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28075 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:40 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.27998 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:41 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:41 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28087 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:41 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28010 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:41.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:41 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:41 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:41 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:41.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:41 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28105 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:41 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28117 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:42 compute-0 nova_compute[247671]: 2026-01-27 09:22:42.547 247675 DEBUG oslo_service.periodic_task [None req-54070e5e-d0b2-4948-a2e6-10ca2e1fd572 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:22:42 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:42 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:22:42 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.27950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.18240 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.27962 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.28018 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.28027 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.18249 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2623989648' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.28036 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.28042 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.18258 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.18270 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.28048 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2595423365' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:22:42 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:22:42 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:22:43 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:43 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18309 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 27 09:22:43 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89835645' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18339 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:43.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:43 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 27 09:22:43 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/301222407' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 27 09:22:43 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:43 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:43 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:43.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:43 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18354 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.28057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.27983 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.18285 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.28075 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.27998 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2085086516' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1978892460' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: pgmap v1801: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.28087 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2425106243' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.28010 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2220821825' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.28105 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/439700164' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.28117 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3785376064' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:22:43 compute-0 ceph-mon[74357]: pgmap v1802: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.18309 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/89835645' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/301222407' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:22:43 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:22:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 27 09:22:44 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 09:22:44 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/995857130' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 27 09:22:44 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28082 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:44 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28210 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:44 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28088 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:44 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 27 09:22:44 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2735644824' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 27 09:22:44 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:22:44 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:22:44 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:22:44 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:22:45 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:22:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:22:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:22:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:22:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:22:45 compute-0 ceph-mgr[74650]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='client.18339 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='client.18354 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3397952813' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2138807079' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/995857130' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='client.28082 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='client.28210 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/2735644824' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3527853137' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:22:45 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:22:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:45.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:45 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:45 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:45 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:45.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:45 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 27 09:22:45 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/819666309' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 27 09:22:46 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28261 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:46 compute-0 podman[290688]: 2026-01-27 09:22:46.273698219 +0000 UTC m=+0.085033551 container health_status 4f798b044efe2a4e0d9941ed7f64efb1926ccac56921f710014e5cda49c21ea3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '249f42cc0a5de6940e06c976a81a3e64ae1c330f940cff0a51730e3f74af51fa-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d-219e626b612aa49ffa2f558eca5c4f007ba4fcbb92c587a73d62b3e7accfd92d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 09:22:46 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18438 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='client.28088 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/3868469963' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:22:46 compute-0 ceph-mon[74357]: pgmap v1803: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3390650037' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/979550844' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/2407732034' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/4024528426' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/544546443' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/819666309' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 27 09:22:46 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 27 09:22:46 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901504378' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 27 09:22:46 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28136 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:47 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 27 09:22:47 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359795015' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 27 09:22:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 27 09:22:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:47.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 27 09:22:47 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28285 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:47 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/2375603237' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 27 09:22:47 compute-0 ceph-mon[74357]: from='client.28261 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:47 compute-0 ceph-mon[74357]: from='client.18438 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:47 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/1985909566' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 27 09:22:47 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3901504378' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 27 09:22:47 compute-0 ceph-mon[74357]: from='client.28136 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:47 compute-0 ceph-mon[74357]: pgmap v1804: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:47 compute-0 ceph-mon[74357]: from='client.? 192.168.122.102:0/3240704797' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 27 09:22:47 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/359795015' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 27 09:22:47 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:47 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:47 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:47.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 27 09:22:47 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1990591749' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 27 09:22:47 compute-0 ceph-mon[74357]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 27 09:22:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3277446980' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 27 09:22:48 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.18474 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:48 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Jan 27 09:22:48 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1653423506' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 27 09:22:49 compute-0 ceph-mgr[74650]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 21 GiB / 21 GiB avail
Jan 27 09:22:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.100 - anonymous [27/Jan/2026:09:22:49.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:49 compute-0 ceph-mon[74357]: from='client.? 192.168.122.101:0/949667885' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 27 09:22:49 compute-0 ceph-mon[74357]: from='client.28285 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:49 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/1990591749' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 27 09:22:49 compute-0 ceph-mon[74357]: from='client.? 192.168.122.100:0/3277446980' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 27 09:22:49 compute-0 radosgw[92542]: ====== starting new request req=0x7f84d5e106f0 =====
Jan 27 09:22:49 compute-0 radosgw[92542]: ====== req done req=0x7f84d5e106f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 27 09:22:49 compute-0 radosgw[92542]: beast: 0x7f84d5e106f0: 192.168.122.102 - anonymous [27/Jan/2026:09:22:49.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 27 09:22:49 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28157 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:22:49 compute-0 ceph-mon[74357]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Jan 27 09:22:49 compute-0 ceph-mon[74357]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/674544824' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 27 09:22:49 compute-0 ceph-mgr[74650]: log_channel(audit) log [DBG] : from='client.28300 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
